Projects: Linear Algebra
Role on Project: Instructor, Subject Matter Expert
Position Title: Professor, Mathematics
Department: Department of Mathematics
Institution: University of Toronto
From this author
Title of Resource | Link | Part of |
---|---|---|
Lecture 10: Matrix Addition, Scalar Multiplication, Transposition (Nicholson Section 2.1) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 21:02
Description: Up to now, matrices have been used as a form of shorthand for solving systems of linear equations. Now we’re going to start doing algebra with matrices --- adding matrices, multiplying matrices, and so forth. To do this, I started by introducing the language of matrices in terms of entries. Defined square matrices, upper triangular matrices, lower triangular matrices, and diagonal matrices.
15:00 --- how to add matrices, how to multiply a matrix by a scalar (i.e. by a real number in this class).
20:00 --- defined the transpose of a matrix and discussed its properties.
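To make these operations concrete, here is a quick NumPy sketch; the matrices are made up for illustration, and the checks mirror the transpose properties discussed at 20:00.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # a 2x3 matrix
B = np.array([[0, 1, 0],
              [2, 0, 2]])   # same size, so A + B is defined

print(A + B)   # matrix addition is entrywise
print(3 * A)   # scalar multiplication: every entry times 3
print(A.T)     # transpose: rows become columns (a 3x2 matrix)

# Two transpose properties, checked numerically:
assert np.array_equal((A + B).T, A.T + B.T)   # (A+B)^T = A^T + B^T
assert np.array_equal((3 * A).T, 3 * A.T)     # (tA)^T = t A^T
```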
Lecture 11: Matrix Transformations (Nicholson Section 2.2) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 28:32
Description: Warning: these lectures were based on a book that used the terms “matrix mappings”, “linear mappings”, and “linear operators”. Nicholson’s book uses “matrix transformations”, “linear transformations” and “geometrical transformations”. So whenever I say the word “mapping” you should think “transformation”. Started matrix mappings with a review of language from high school: function, domain, range, etc. Introduced the language of “codomain”.
6:55 --- given an mxn matrix A, define the matrix transformation (aka matrix mapping) T_A (aka f_A).
12:50 --- Did two examples of matrix transformations where the matrices A are both 2x2 matrices.
15:30 --- Showed how to represent the matrix transformation graphically via “before” and “after” pictures. Note that from the graphic representation it’s clear that T_A(x+y) = T_A(x) + T_A(y).
21:00 --- For the first example, does it appear that the matrix mapping is onto R^2?
21:35 --- Did second example, discussing its graphic representation. It’s clearly not going to be onto R^2.
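A minimal NumPy sketch of a matrix transformation, using an illustrative 2x2 matrix rather than the lecture’s examples; it checks the additivity that the before/after pictures make visible.

```python
import numpy as np

A = np.array([[2, 1],
              [0, 1]])   # an illustrative 2x2 matrix (not the lecture's)

def T_A(x):
    """The matrix transformation x -> Ax from R^2 to R^2."""
    return A @ x

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# The observation from 15:30: T_A respects vector addition.
assert np.allclose(T_A(x + y), T_A(x) + T_A(y))
print(T_A(x), T_A(y), T_A(x + y))
```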
Lecture 12: Introduction to Matrix Multiplication (Nicholson Section 2.3) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 13:46
Description: Started with a review of an earlier example that motivated matrix-vector multiplication.
3:50 --- Introduced matrix multiplication. Just because AB is defined doesn’t mean that BA is defined.
4:50 --- And even if they’re both defined, it doesn’t mean that AB=BA. AB and BA might not even be the same size.
9:15 --- In general, matrix multiplication doesn’t commute --- the order matters! Even if AB and BA are the same size.
11:00 --- Did an example showing what happens when you multiply a matrix by a diagonal matrix. This is an important example to remember.
12:10 --- For real numbers you know that ab = ac implies b=c only if a is nonzero. Similarly, if AB=AC this doesn’t always imply that B=C. But it’s even trickier --- it’s not just a matter of worrying that A might be the zero matrix. Gave an example of a nonzero matrix A such that AB=AC but B doesn’t equal C. (A concrete pair is sketched in code below.)
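Both pitfalls, sketched with matrices chosen for illustration (not necessarily the lecture’s examples):

```python
import numpy as np

# Non-commutativity: AX and XA can differ even when both are defined.
A = np.array([[1, 0],
              [0, 0]])   # nonzero, but not invertible
X = np.array([[0, 1],
              [0, 0]])
print(A @ X)   # [[0, 1], [0, 0]]
print(X @ A)   # [[0, 0], [0, 0]] --- different!

# Failure of cancellation: AB = AC even though B != C (and A is not zero).
B = np.array([[1, 0],
              [0, 1]])
C = np.array([[1, 0],
              [0, 5]])
assert np.array_equal(A @ B, A @ C)
assert not np.array_equal(B, C)
```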
Lecture 13: Introduction to Matrix Inverses (Nicholson Section 2.4) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 47:53
Description: Started by reminding students that Ax=b will have a) no solution, b) exactly one solution, or c) infinitely many solutions and discussed what this had to do with the Column Space of A and rank(A). If you don’t know what the Column Space of A is yet, ignore that part!
4:40 --- one option for trying to solve Ax=b is elementary row operations. Discussed the costs & benefits of this approach.
6:20 --- another option is to find a matrix B (if it can be found) so that AB = I (the identity matrix) and use B to find the solution x. Discussed the costs & benefits of this approach.
12:50 --- when is it better to use elementary row operations to try and solve Ax=b and when is it better to try and find B so that BA=I?
16:00 --- does every square matrix have some matrix so that BA = I? Gave an example of a 2x2 matrix for which there is no B so that BA=I --- presented two different arguments as to why there could never be a B so that BA=I.
25:00 --- presented a super-important and super-useful theorem about matrix inverses.
31:20 --- used the theorem to construct an algorithm to try and find a matrix B so that AB=I. Note: the algorithm is using block multiplication. Specifically, if A is 2x2 and B is 2x2 with columns b1 and b2, then AB = A [b1 b2] = [Ab1 Ab2] = [e1 e2] = I. Make sure that you’re comfortable with this block multiplication!
37:45 --- Used the matrix inversion algorithm on a 2x2 matrix for which there is a B so that BA=I.
40:50 --- Used the matrix inversion algorithm on a 2x2 matrix for which there isn’t a B so that BA=I.
43:30 --- bird’s eye view of matrix inversion algorithm.
46:20 --- Defined what it means for a square matrix to be invertible.
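The algorithm is easy to sketch in code: row-reduce the block matrix [A | I] and, if the left block becomes I, read A^{-1} off the right block. A bare-bones Gauss-Jordan sketch for illustration (not the lecture’s exact presentation, and not production-quality numerics):

```python
import numpy as np

def try_invert(A):
    """Row-reduce [A | I]; if the left block reaches I, the right block is A^{-1}."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # the augmented matrix [A | I]
    for col in range(n):
        pivots = [r for r in range(col, n) if abs(M[r, col]) > 1e-12]
        if not pivots:
            return None                            # no pivot in this column: not invertible
        M[[col, pivots[0]]] = M[[pivots[0], col]]  # swap a pivot row into place
        M[col] /= M[col, col]                      # scale the pivot entry to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]         # clear the rest of the column
    return M[:, n:]                                # the right block is now A^{-1}

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(try_invert(A))                                   # matches np.linalg.inv(A)
print(try_invert(np.array([[1.0, 2.0], [2.0, 4.0]])))  # None: not invertible
```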
Lecture 14: Properties of Inverse Matrices, Invertible Matrix Theorem (Nicholson Section 2.4) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 47:08
Description: Halloween lecture: instructor was mugged and replaced by an angel. Started with the definition of “invertible matrix”. Reviewed the super-useful theorem which the matrix inversion algorithm is based on.
2:00 --- simple example: when is a diagonal matrix invertible?
5:50 --- three 3x3 matrices --- are they invertible? Note that if the first matrix is A then the second matrix is 2A and the third matrix is A^T. From the example, we suspect that if t is nonzero then (tA)^{-1} = (1/t)A^{-1} and if A is invertible then (A^T)^{-1} = (A^{-1})^T.
11:45 --- stated a theorem to this effect. The proof of the theorem uses the super-important theorem: if you can find C so that AC = I then voila --- you’re done --- you’ve shown that A is invertible, you’ve found the inverse of A, and you get to write A^{-1} = C. (You don’t even get to write A^{-1} until you’ve shown A is invertible.) Basically, if you can find a matrix that does the job that an inverse should do then you’ve found the inverse.
28:00 --- did a classic exam question: if A is a square matrix such that A^2 - A = 2I, find A^{-1}. (Worked through numerically in the sketch below.)
31:45 --- Inversion does not “play nice” with matrix addition.
38:40 --- The Invertible Matrix Theorem. Given a square matrix, presented 6 equivalent statements. If any one of them is true then all of them are true, and it follows that the matrix is invertible. The point: some of those statements are pretty easy to check! Note: item 5 (the columns of the matrix are linearly independent) uses an idea you haven’t learnt about yet --- ignore it for the moment. Item 6 (the Column Space of A is R^n) uses the Column Space of a matrix, which you also haven’t learnt about --- ignore it for the moment.
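A numerical check of this lecture’s highlights, using a matrix cooked up for illustration to satisfy A^2 - A = 2I (it is not from the lecture):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 0.0]])   # chosen so that A^2 - A = 2I
I = np.eye(2)
assert np.allclose(A @ A - A, 2 * I)

# The exam question's trick: A^2 - A = 2I means A(A - I) = 2I,
# so A((A - I)/2) = I, and the super-important theorem gives A^{-1} = (A - I)/2.
A_inv = (A - I) / 2
assert np.allclose(A @ A_inv, I)

# The two suspected properties from 5:50, checked numerically:
t = 3.0
assert np.allclose(np.linalg.inv(t * A), np.linalg.inv(A) / t)  # (tA)^{-1} = (1/t)A^{-1}
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)      # (A^T)^{-1} = (A^{-1})^T
```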
Lecture 15: Introduction to Linear Transformations (Nicholson Section 2.6) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 13:38
Description: If T_A is a matrix transformation from R^2 to R^2 and I tell you what T_A does to the vector [1;0] and what it does to the vector [0;1], can you use this to figure out what T_A does to any vector x = [x1;x2] in R^2?
6:00 --- how matrix transformations act on the sum of two vectors, and how they act on a scalar multiple of a vector.
7:45 --- Defined “linear transformation” (aka “linear mapping”). Gave an example of a transformation from R^2 to R^2 which is not a linear transformation. Here's a nice video on Linear Transformations and how to present linear transformations using matrices (i.e. as a matrix transformation) by 3Blue1Brown. It’s really worth watching even if we can’t present the graphics this way in class, in the book, or on the exams…
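A sketch of the idea that opens the lecture: knowing T(e1) and T(e2) pins down T everywhere. The values assigned to T(e1) and T(e2) below are made up for illustration.

```python
import numpy as np

# Suppose all we know is what T does to the standard basis vectors:
T_e1 = np.array([2.0, 1.0])    # T([1;0])  (illustrative values)
T_e2 = np.array([-1.0, 3.0])   # T([0;1])

def T(x):
    """Linearity forces T(x) = x1*T(e1) + x2*T(e2) for x = [x1; x2]."""
    return x[0] * T_e1 + x[1] * T_e2

# Equivalently, T is the matrix transformation whose columns are T(e1) and T(e2):
A = np.column_stack([T_e1, T_e2])
x = np.array([5.0, -2.0])
assert np.allclose(T(x), A @ x)
print(T(x))
```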
Lecture 16: Geometric examples of Linear Transformations (Nicholson Section 2.6/Section 4.4) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 48:48
Description: Started by reviewing the definition of linear transformations. Review of the “before” and “after” presentation of what a transformation from R^2 to R^2 does. (There’s nothing special about this before-and-after way of presenting what a transformation does --- it’s just easiest to draw when the domain and codomain are R^2, rather than trying to draw things in R^3 or R^5 or something…)
4:25 --- In terms of this “before” and “after” presentation, what does it mean that T(x+y) = T(x) + T(y)?
7:49 --- In terms of this “before” and “after” presentation, what does it mean that T(r x) = r T(x)?
8:43 --- Is the mapping T([x1;x2]) = [1;x2] a linear transformation? We can see pictorially that it isn’t. Separately, we can check that T([2;0]+[0;2]) doesn’t equal T([2;0]) + T([0;2]). (To show that something isn’t a linear transformation you just have to give a single example where it breaks one of the rules.)
11:56 --- started geometric transformations. First example: dilation. Proved it’s a linear transformation.
19:55 --- Can I represent dilation as a matrix mapping? Yes, but need to be careful --- what matrix you get depends on what basis you use for the domain. (One thing that’s confusing/important is that when I defined dilation, it was done without referring to any specific basis --- it was defined simply as “given a vector in R^2, double its length”. I didn’t need to refer to the coordinates of the vector --- the moment I refer to the coordinates of the vector I’ve implicitly chosen a basis. For example, if I’m in Matlab and I write “x = [2;3]” then implicit in this is the standard basis, and what I mean is “x = 2 [1;0] + 3 [0;1] = 2 e1 + 3 e2”.)
32:08 --- Second example: the “silly putty” transformation. Note: representing this transformation requires some sort of coordinates because it does one thing in one direction and nothing in another. And so I defined it directly in terms of coordinates (implicitly choosing a basis like {[1;0],[0;1]}). Checked that the transformation is linear and represented it as a matrix transformation.
40:10 --- Third example: shear transformation. Represented it using coordinates (you can check on your own that it’s a linear transformation) and represented it as a matrix transformation.
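The three geometric examples as matrix transformations in the standard basis; the scale factors below are illustrative choices, not necessarily the lecture’s.

```python
import numpy as np

dilation = np.array([[2.0, 0.0],   # "double every vector's length"
                     [0.0, 2.0]])
stretch  = np.array([[3.0, 0.0],   # "silly putty": stretch in x, do nothing in y
                     [0.0, 1.0]])
shear    = np.array([[1.0, 1.5],   # shear in the x direction
                     [0.0, 1.0]])

x = np.array([1.0, 2.0])
for name, A in [("dilation", dilation), ("stretch", stretch), ("shear", shear)]:
    print(name, A @ x)   # the "after" picture of x under each transformation
```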
Lecture 17: Representing Linear Transformations as Matrix Transformations (Nicholson Section 2.6/Section 4.4) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 50:39
Description: Started with a discussion of language mapping/transformation/operator.
2:00 --- Revisited shear mappings. Shear in x direction: shear to the right versus shear to the left. Shear in y direction: shear up versus shear down.
13:37 --- Does shearing change area?
16:35 --- The linear transformation T(x) = Proj_[2;1](x). Discussed domain, codomain, range, vectors that are sent to the zero vector by the linear transformation, and vectors that are unchanged by the transformation. All of this was done intuitively; we want to do it rigorously.
37:42 --- The transformation that corresponds to reflecting a vector in a given line. Referred students to the book on how to understand this transformation in terms of projections, how to represent it as a matrix transformation and so forth. (Basically, you need to do all the stuff that was done in 16:35-37:41 but for this new transformation.)
39:16 --- The transformation that corresponds to rotating a vector counter-clockwise by a given angle.
47:07 --- Composition of geometric transformations --- how to find a matrix transformation that represents the composition of three geometric transformations. (Note: the same logic would apply for the composition of any number of geometric transformations, not just three. And it’s not limited to geometric transformations; it works for linear transformations in general.)
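A sketch of these transformations and their composition in NumPy. The line through [2;1] is from the lecture; the projection-matrix formula P = vv^T/(v^T v), the reflection formula 2P - I, the rotation angle, and the test vectors are standard choices assumed here for illustration.

```python
import numpy as np

v = np.array([2.0, 1.0])
P = np.outer(v, v) / (v @ v)   # projection onto the line through [2;1]
F = 2 * P - np.eye(2)          # reflection in that line, built from the projection

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],   # counterclockwise rotation by theta
              [np.sin(theta),  np.cos(theta)]])

# Sanity checks matching the intuitive discussion at 16:35:
assert np.allclose(P @ v, v)   # v itself is unchanged by the projection
w = np.array([-1.0, 2.0])      # w is perpendicular to v...
assert np.allclose(P @ w, 0)   # ...so it is sent to the zero vector

# Composition (47:07): the matrix of "project, then rotate, then reflect"
# is the product of the three matrices, read right to left.
C = F @ R @ P
x = np.array([3.0, 1.0])
assert np.allclose(C @ x, F @ (R @ (P @ x)))
```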
Lecture 18: Composition of Linear Transformations (Nicholson Section 2.6/Section 4.4) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 25:01
Description: Started with the composition of two geometrical transformations. How to find the matrix transformation that represents the composition of two linear transformations.
13:00 --- is rotating and then shearing the same as shearing and then rotating? What’s a fast way to answer this question? (See the sketch below.)
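The fast way, in code: build the two matrices and compare the two products. The angle and shear factor are illustrative.

```python
import numpy as np

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],   # rotation
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[1.0, 2.0],                        # shear in the x direction
              [0.0, 1.0]])

# "Rotate then shear" has matrix S R; "shear then rotate" has matrix R S.
print(S @ R)
print(R @ S)
print(np.allclose(S @ R, R @ S))   # False: the order matters
```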
Lecture 19: Introduction to Determinants (Nicholson Section 3.1) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 40:57
Description: Is the general 2x2 matrix A = [a,b;c,d] invertible? Used the matrix inversion algorithm on this general 2x2 matrix and found that in order for A to be invertible we need ad-bc to be nonzero. Also, if ad-bc is nonzero then there’s a formula that we can memorize that gives us the inverse of A.
7:55 --- Stated a theorem for 2x2 matrices about whether or not they’re invertible.
8:30 --- Defined the determinant of a 2x2 matrix.
12:00 --- Defined the determinant of a 3x3 matrix using a formula. I do not have this formula memorized even though I’ve been using and teaching linear algebra for over 30 years. The reason I could write it down so quickly is because I was looking at the matrix and writing it down by knowing the definition in terms of cofactors (see 15:55 for 3x3 matrices and 29:10 for general NxN matrices) and applying that definition in real time.
14:25 --- For a general 3x3 matrix: if the third row is a multiple of the second row, showed that the determinant is zero. (You should make sure that you can also repeat this argument if the second row is a multiple of the third row.)
15:55 --- Noted that the determinant of a 3x3 matrix is found using a specific linear combination of determinants of 2x2 submatrices.
17:45 --- General discussion of computing determinants of 4x4 matrices, 5x5 matrices, 6x6 matrices --- how many 2x2 determinants will be needed? Computing a determinant’s a lot of work! (Will there be a faster way? We’ll see in the next lecture that there is.)
21:10 --- Defined the determinant of an NxN matrix in terms of a cofactor expansion along the first row. Defined what a cofactor is.
25:45 --- Computed the determinant of a specific 3x3 matrix.
29:00 --- How to use Wolfram Alpha to find the cofactors of a square matrix; you can use this to check your work.
31:55 --- Stated a theorem: the determinant of a square matrix can be computed by using a cofactor expansion along any row or column --- it doesn’t have to be the first row. You can choose whichever row or column makes it easiest for you.
33:50 --- Demonstrated the usefulness of this theorem by computing the determinant of an upper triangular matrix.
38:05 --- Theorem: If A is an upper triangular, lower triangular, or diagonal matrix then det(A) is the product of the diagonal entries of A.
39:25 --- Gave a 3x3 motivation for det(A) = det(A^T).
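A direct implementation of the cofactor-expansion definition from 21:10, for illustration. It also makes the cost point from 17:45 concrete: the recursion does roughly n! work, which is why the row-reduction approach of the next lecture wins.

```python
import numpy as np

def det_cofactor(A):
    """Determinant via cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # delete row 0 and column j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)     # sign * entry * minor det
    return total

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
print(det_cofactor(A), np.linalg.det(A))   # the two should agree (22.0)
```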
Lecture 1: An Introduction to Linear Systems (Nicholson, Section 1.1) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 26:50
Description: Introduction to some areas where linear algebra appears.
0:45 --- Presented a “diet problem” which can be written as a linear system.
7:00 --- Wrote the diet problem in terms of linear combinations of vectors.
9:00 --- Solved the diet problem using methods from high school. (Note: you likely haven’t learnt about linear combinations of vectors yet, but I hope that the explanation is clear enough that you can look past the language of “linear combinations” for the moment.)
14:45 --- Wrote down a linear programming problem for the second diet problem (but didn’t solve it).
17:13 --- Introduced the language of systems of linear equations (unknowns, coefficients, linear, right-hand sides). Presented a system of 2 equations in 2 unknowns.
18:15 --- Discussed the system graphically and identified a solution of the system.
21:35 --- Performed elementary operations on the system of linear equations and studied the new linear systems graphically. (Solving such a system numerically is sketched in code below.)
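A NumPy sketch of a diet-style 2x2 system; the numbers are invented for illustration, not the lecture’s.

```python
import numpy as np

# A made-up system in the spirit of the diet problem: two foods, two nutrients.
#   3*x1 + 1*x2 = 9
#   1*x1 + 2*x2 = 8
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)   # the answer you'd get by high-school elimination
print(x)                    # [2. 3.]
assert np.allclose(A @ x, b)

# The same right-hand side, written as a linear combination of the columns (7:00):
assert np.allclose(x[0] * A[:, 0] + x[1] * A[:, 1], b)
```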
Lecture 20: Elementary row operations and determinants (Nicholson Section 3.1) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 50:56
Description:
1:40 --- Computed the determinant of a specific 3x3 matrix by doing a cofactor expansion about its second column.
6:00 --- Did elementary row operations on A and carried it to Row Echelon Form. Computed the determinant of the REF matrix. When computing determinants you don’t need to carry the matrix to REF, just to upper triangular form! Then you can use the fact that the determinant of an upper triangular matrix is the product of the diagonal entries.
9:30 --- Stated the effect of each elementary row operation on the determinant of a matrix and explained how to remember these rules.
14:05 --- revisited the previous example and worked out how to recover det(A) from the determinant of the REF matrix, as long as you know the sequence of elementary row operations you took to get to the REF. If all you have is A and the REF matrix then you can’t find det(A) from the determinant of the REF matrix.
18:10 --- Is there any reason to carry A all the way to RREF? Did this for an example and showed that it still works but it’s an unnecessary amount of work if all you want is det(A). The real point of this example was to show why if det(A) is nonzero then the RREF of A must be the identity matrix (and therefore A is invertible). And if det(A) is zero then the RREF of A must have a row of zeros (and therefore A is not invertible).
31:25 --- Proved that if you create B by multiplying a row of A by t then det(B) = t det(A). Did this for a 3x3 matrix.
38:00 --- Proved that if you create B by swapping two rows of A then det(B) = -det(A). I proved this by induction. I proved that it’s true for 2x2 matrices by using the definition of determinant. I showed how to leverage this knowledge about 2x2 matrices to prove that it’s true for 3x3 matrices. The next step is to leverage the knowledge about 3x3 matrices to prove that it’s true for 4x4 matrices. This pattern continues forever, and that’s the idea behind proof by induction.
48:40 --- Used the theorem to show that if A has a repeated row then det(A) = 0.
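A sketch of computing det(A) by reduction to upper triangular form, tracking how each row operation changes the determinant (the rules from 9:30). Illustrative code, not the lecture’s presentation verbatim.

```python
import numpy as np

def det_by_row_reduction(A):
    """Reduce A to upper triangular form and multiply the diagonal entries,
    compensating for the row operations performed along the way."""
    M = A.astype(float).copy()
    n = M.shape[0]
    sign = 1.0
    for col in range(n):
        pivot = next((r for r in range(col, n) if abs(M[r, col]) > 1e-12), None)
        if pivot is None:
            return 0.0               # no pivot available: the determinant is zero
        if pivot != col:
            M[[col, pivot]] = M[[pivot, col]]
            sign = -sign             # a row swap flips the sign of the determinant
        for r in range(col + 1, n):
            # Adding a multiple of one row to another does NOT change the determinant.
            M[r] -= (M[r, col] / M[col, col]) * M[col]
    return sign * np.prod(np.diag(M))   # det of a triangular matrix: diagonal product

A = np.array([[2.0, 1.0, 0.0],
              [4.0, 3.0, 1.0],
              [0.0, 5.0, 2.0]])
print(det_by_row_reduction(A), np.linalg.det(A))   # both -6.0
```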
Lecture 21: Usefulness of the determinant: invertibility and geometry (Nicholson Section 3.2/Section 4.4) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 49:15
Description: Review of how the three elementary row operations affect the determinant.
6:50 --- If A is equivalent to B by some sequence of elementary row operations then det(B) equals some nonzero number times det(A). It follows that A is invertible if and only if det(A) is nonzero.
16:30 --- How determinants interact with products of square matrices: det(AB) = det(A)det(B). It follows that det(AB) = det(BA) and that if A is not invertible then AB and BA aren’t invertible either.
19:00 --- How to remember that det(AB) = det(A)det(B).
21:00 --- Did a classic midterm question involving determinants.
27:00 ---How to use determinant to compute the area of a parallelogram. Discussed why the absolute value is needed.
35:00 --- How the area of the image of a region under a linear mapping is determined by the determinant of the standard matrix for the linear mapping and the area of the region. A note on language: the previous textbook introduced “standard matrix” early on, which is why I refer to it in these lectures; Nicholson only introduces it in chapter 9, so you don’t know this language yet. Here’s what “standard matrix” means: Nicholson refers to “the matrix of a linear transformation” at the bottom of page 106 --- this is the “standard matrix”; he just doesn’t call it that until page 497 (he’s trying to avoid confusing you too early, I assume). I proved the result for a parallelogram and stated it for general regions in the plane. Note: the proof for general regions in the plane is a multivariable Calculus thing, not a linear algebra thing.
41:00 --- Example where the linear mapping is rotation.
43:30 --- Example where the linear mapping is reflection about a line.
47:00 --- Example where the linear mapping is projection onto a line.
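Numerical checks of the product rule and the two area facts; all matrices and vectors below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# The product rule (16:30), and its consequence det(AB) = det(BA):
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A))

# Area of the parallelogram spanned by u and v (27:00): |det [u v]|.
u, v = np.array([3.0, 0.0]), np.array([1.0, 2.0])
area = abs(np.linalg.det(np.column_stack([u, v])))
print(area)   # 6.0

# Under a linear mapping with matrix M, areas scale by |det M| (35:00):
M = np.array([[2.0, 1.0],
              [0.0, 1.0]])
image_area = abs(np.linalg.det(np.column_stack([M @ u, M @ v])))
assert np.isclose(image_area, abs(np.linalg.det(M)) * area)
```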
Lecture 22: Powers of matrices, introduction to eigenvalues & eigenvectors (Nicholson Section 3.3) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 47:44
Description: Started with a 2x2 matrix and looked at what happened if I applied it over and over again to [1;0]. It’s converging to the vector [3;-2]. What’s with that? If I apply it over and over again to a different vector, I find that the result converges to something. Why? How did I compute A^40 anyway?
5:25 --- Represented the matrix as a product of three matrices, one of which is diagonal. This made it super-easy to compute A^n and also to figure out where those limiting vectors were coming from.
16:20 --- Introduced the definition of an eigenvector of a linear mapping. Defined eigenvalue and eigenvector-eigenvalue pair.
18:40 --- Did geometric example --- what are the eigenvalues & eigenvectors for reflecting about a line? What do they mean geometrically?
23:10 --- Demonstrated that a nonzero multiple of an eigenvector is also an eigenvector.
27:20 --- Why do we require that eigenvectors be nonzero?
29:15 --- Did geometric example --- what are the eigenvalues & eigenvectors for projecting onto a line? What do they mean geometrically?
33:00 --- What about counterclockwise rotation by theta? Can you find a (real) eigenvector?
34:40 --- If I give you a matrix and a vector, how can you figure out if the vector is an eigenvector? If it is an eigenvector, how can you find its eigenvalue?
37:35 --- Given a matrix, how do I find its eigenvectors and eigenvalues? Tried the natural first idea --- tried to find the eigenvector and eigenvalue simultaneously. Got two nonlinear equations in three unknowns. Yikes!
41:15 --- Try to break the problem into two steps. First find the eigenvalues. Subsequently, for each eigenvalue try to find eigenvectors. Explained why we’re looking for lambdas so that det(A-lambda I) = 0.
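A sketch of the power trick, using a matrix cooked up to behave like the lecture’s opening example --- eigenvalues 1 and 0.5, with [3;-2] as the eigenvector for eigenvalue 1. It is not the actual lecture matrix; the numbers are assumed for illustration.

```python
import numpy as np

P = np.array([[3.0, 1.0],
              [-2.0, 1.0]])   # columns are the eigenvectors
D = np.diag([1.0, 0.5])       # the eigenvalues
A = P @ D @ np.linalg.inv(P)  # A = P D P^{-1}

# High powers are cheap through the factorization (5:25): A^n = P D^n P^{-1},
# and D^n just raises each diagonal entry to the n-th power.
n = 40
A_n = P @ np.diag(np.diag(D) ** n) @ np.linalg.inv(P)
assert np.allclose(A_n, np.linalg.matrix_power(A, n))

# Applying A over and over kills the 0.5-eigenvalue direction, so the
# iterates converge to the eigenvalue-1 component of the starting vector:
x = np.array([4.0, -1.0])     # = [3; -2] + [1; 1]
print(A_n @ x)                # very close to [3, -2]

# Checking whether a given vector is an eigenvector (34:40): is Av a multiple of v?
v = np.array([3.0, -2.0])
print(A @ v)                  # equals 1 * v, so v is an eigenvector with eigenvalue 1
```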
Lecture 23: How to find eigenvalues and eigenvectors (Nicholson Section 3.3) | Link | Linear Algebra Lecture Videos |
Alternate Video Access via MyMedia | Video Duration: 49:24
Description: Note: In this lecture, I use the language of “linear combinations” --- you may not have seen this language yet but I hope it’s clear enough what is meant. Reviewed definition of eigenvector and eigenvalue. Reviewed why we’re looking for lambdas so that det(A-lambda I) = 0.
6:30 --- Returned to the reflection example from previous lecture. We know the eigenvalues & eigenvectors geometrically but how could we have found them algebraically? Worked through the example.
16:45 --- Important! What happens if you’d made a mistake when you computed your eigenvalues? What happens when you then try to find eigenvectors?
21:30 --- Did a 3x3 example. Here we don’t have geometric intuition and we’re going to have to compute the eigenvalues by finding the roots of a cubic polynomial. This example’s interesting because we get a repeated eigenvalue and so when we look for eigenvectors we get them in two different directions.
44:10 --- Important! If you add two eigenvectors together and they have different eigenvalues, is the sum also an eigenvector? No!
46:15 --- Slammed through a final 3x3 example, introduced the language of “algebraic multiplicity” of eigenvalues. In this example, there was a repeated eigenvalue but I couldn’t find two eigenvectors with different directions.
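The two-step recipe in code, for a made-up 2x2 matrix, plus a check of 44:10’s warning. For a 2x2 matrix the characteristic polynomial det(A - lambda I) works out to lambda^2 - trace(A) lambda + det(A).

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])   # an illustrative 2x2 matrix

# Step 1: eigenvalues are the roots of det(A - lambda I) = 0.
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]   # lambda^2 - tr(A) lambda + det(A)
print(np.roots(coeffs))                          # 5.0 and 2.0

# Step 2: for each eigenvalue, find eigenvectors (NumPy does both steps at once).
vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)           # the defining equation A v = lambda v

# 44:10's warning: the sum of eigenvectors with DIFFERENT eigenvalues
# is generally not an eigenvector.
w = vecs[:, 0] + vecs[:, 1]
print((A @ w) / w)   # if w were an eigenvector these ratios would all match; they don't
```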