Exercise Sheet 10
The rules for uploading are the same as always. If you forgot them, you can read them again here.
Your solutions (files can only be read by the uploader and by us)
Name | Solution (PDF) | Code (ZIP or TGZ)
These were the comments and questions for Exercise Sheet 10
It's there now. Hannah 21Jan10 00:59
The page to upload the solutions is still missing. Florian 21Jan10 00:02
To Florian: it's enough if you output the eigenvector, but it would also be nice if you output an approximation of the eigenvalue. There is no unique procedure for doing this, however, and that's why I didn't include it formally in the exercise. The point is that what you will compute after a certain number of iterations is only almost an eigenvector, and how do you compute an approximate eigenvalue for something that is only almost an eigenvector? There are many ways to do it. One is to take the quotient of each component of A*x_k and the corresponding component of x_k, and then take the average of these quotients. Or take the min and the max quotient and output both; that is probably more reasonable. Hannah 20Jan10 23:14
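A minimal sketch of the min/max-quotient idea, assuming NumPy (the function name eigenvalue_estimate, the tolerance eps, and the example matrix are illustrative only, not part of the exercise):

import numpy as np

# Estimate the dominant eigenvalue from an almost-eigenvector x_k by
# looking at the component-wise quotients of A*x_k and x_k.
def eigenvalue_estimate(A, x_k, eps=1e-12):
    y = A @ x_k
    mask = np.abs(x_k) > eps      # skip (near-)zero components of x_k
    q = y[mask] / x_k[mask]
    return q.min(), q.max()       # both are close to the eigenvalue if x_k is good

# Usage: x is only approximately an eigenvector, so min and max differ slightly.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([0.53, 0.85])        # roughly the dominant eigenvector
print(eigenvalue_estimate(A, x))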
Please give me details on which exercise cost you how much time and why. Sorry, I couldn't respond earlier, I was working from home, and the internet connection to the university has been extremely bad all day long. Hannah 20Jan10 23:08
I think that I also can't complete the sheet this time (especially not in 6 hours), and it's a pity that the worst sheet of all no longer gets dropped from the counting. I know that some of us now have to prioritize, because at the beginning of February (and later) there will be exams to write and also seminars. That means it would not be the new bonus point system that is the reason for stopping after a certain amount of time spent on the tasks, but the general situation. Manuela 20Jan10 21:06
I used up my 6 hours for the exercise sheet and I couldn't complete it. What is the procedure in this case? Johannes 20Jan10 19:34
Thanks for the additional hint for exercise 2, I think I finally managed to get a solution with it. Then I have another question about exercise 4: is it enough if the program outputs the eigenvector, or should it also give the corresponding maximal eigenvalue? Florian 20Jan10 18:25
Hi Mirko: no, I meant what I wrote. You can easily check that it makes sense from the matrix / vector dimensions: u_i is an m x 1 vector, s_i is a 1 x 1 scalar, v_i is an n x 1 vector and hence v_i' (the transpose of v_i) is a 1 x n vector. Hence, with respect to matrix dimensions, u_i * s_i * v_i' is (m x 1) * (1 x 1) * (1 x n), which matches as it should and gives an m x n matrix. Ok? Hannah 19Jan10 23:32
About the last post: with A = sum_{i=1, ...} do you mean ||A|| = sum_{i=1, ...}, so the Frobenius norm of A? And by v_i' do you mean row i of V'? If so, I don't get ||A|| = sum_{i=1,...,r} u_i * s_i * v_i'. Or am I totally wrong? I'm just confused. Mirko, 19.01, 22:18
Hi Björn + all: let the SVD of A be U * S * V' and let u_1,...,u_r be the r columns of U, and let v_1,...,v_r be the r columns of V (or, equivalently, the r rows of V'), where r is the rank of the matrix. Let s_1,...,s_r be the r diagonal entries of S. Then convince yourself that you can write A = sum_{i=1,...,r} u_i * s_i * v_i'. With that, you easily get the SVD of A_k and also of A - A_k. Tell me if this helped you. Hannah 19Jan10 17:32
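If it helps, here is a quick numerical sanity check of this identity, assuming NumPy (the matrix sizes and the rank-k cutoff are arbitrary, purely for illustration):

import numpy as np

# Check A = sum_{i=1..r} u_i * s_i * v_i'
# (u_i: columns of U, v_i: columns of V, s_i: singular values).
m, n = 5, 3
A = np.random.rand(m, n)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = np.linalg.matrix_rank(A)

A_sum = np.zeros((m, n))
for i in range(r):
    # np.outer(U[:, i], Vt[i, :]) is the m x n rank-1 matrix u_i * v_i'
    A_sum += s[i] * np.outer(U[:, i], Vt[i, :])

print(np.allclose(A, A_sum))   # True (up to rounding)

# A_k keeps only the first k terms of the sum:
k = 2
A_k = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))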
Hey, is it possible that for exercise 2 it's ||A||-||A_k|| instead of ||A-A_k||? We've just discussed this because we both were stuck, and ||A||-||A_k|| seems more obvious to us and provable with the hints provided. Of course, it is possible that we're mistaken somewhere. We managed to prove the lemmas from the hint but still fail to prove the statement from the exercise. Otherwise, we might need a hint to show that ||A-A_k|| = ||S-S_k||, where S_k is S with the values s_ij for i,j > k set to 0. Bjoern 19.1. 17:18
Ok, I added the marks to your individual pages now. If a mark does not correspond to the assignment scheme in my post below, please tell us. Hannah 19Jan10 1:56am
Hi Florian + all: yes, you are right, a matrix-vector multiplication is all that is needed to implement the power method. Concerning your mark for the first eight exercise sheets, you can easily compute it from the following point range -> mark assignment: 28 - 30 points = 1.0, 26 - 27 points = 1.3, 24 - 25 points = 1.7, 22 - 23 points = 2.0, 20 - 21 points = 2.3, 18 - 19 points = 2.7, 16 - 17 points = 3.0, 14 - 15 points = 3.3, 12 - 13 points = 3.7, 10 - 11 points = 4.0. Your total number of points (with the worst exercise sheet taken out of the counting) is already on your page, I will add the corresponding marks now. Hannah 19Jan10 1:45am
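A minimal sketch of that loop, assuming NumPy (the function name power_method and the fixed iteration count are illustrative only):

import numpy as np

# Power method: each iteration needs only one matrix-vector product A @ x,
# no matrix-matrix multiplication.
def power_method(A, num_iter=100):
    x = np.random.rand(A.shape[1])
    for _ in range(num_iter):
        x = A @ x                  # the only multiplication needed
        x /= np.linalg.norm(x)     # renormalize to avoid overflow
    return x                       # approximate dominant eigenvector

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(power_method(A))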
Am I right with the assumption that we do not even need a matrix-matrix multiplication for exercise 4 but just a matrix-vector multiplication? And another question, where can I find the mark for the first 8 exercise sheets? Thanks. Florian 18.Jan10 22:38
Oh yes you're right. I forgot to transpose V. Sorry my mistake. Jonas 18.Jan10 21:49
Hi Jonas, no I think it's correct, V is n x k and then the transpose of V is k x n, and that is what appears in the product of the SVD. Hannah 18Jan10 21:39
One question: on slide 11 of lecture 10, shouldn't V be a k x n matrix instead of an n x k one? Jonas 18.Jan10 20:39
Hi Jens + all: we applied this rule to your exercise sheets until the Christmas break, that is, we simply took your worst sheet out of the counting. Given that there are only 4, at most 5, exercise sheets after the Christmas break, the rule does not apply for those sheets. Hannah 18Jan10 18:19
I have a question about skipping an exercise sheet: At the beginning of the semester you said that we would be allowed to do this once. Does it still hold? Jens 18Jan10 18:07
Oh yes, sorry, I forgot, it's done now. Hannah 17Jan10 13:43
Matthias - Can you please upload the slides for Lecture 10 as well?
Here is a major hint for Exercise 2. There are many ways to prove this, but one of the most natural is via these three steps, each of which is not too hard to prove. Note that, unlike in the lecture, in the hint below I use X' to denote the transpose of a matrix or vector X. This is a common notation in the numerics community (whereas the pure algebra people prefer the T superscript). Hannah 14Jan10 23:37
(1) Prove that for an m x m matrix U with U' * U = I and for an arbitrary m x 1 vector x, the L2-norms of x and U*x are equal. The L2-norm of a vector x is defined as the square root of the sum of the squares of its components. It helps to observe that the square of the L2-norm of x can also be written as x' * x.
(2) Prove that for an m x m matrix U with U' * U = I and for an arbitrary m x n matrix A, the L2-norms of A and of U * A are equal. This is easy to prove using (1).
(3) Prove that if A has the singular value decomposition U * S * V', then the L2-norm of A is the same as the L2-norm of S.
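To get a feeling for these three statements, here is a small numerical illustration with random data, assuming NumPy (not a substitute for the proof; np.linalg.norm of a matrix is exactly the square root of the sum of squared entries used above):

import numpy as np

m, n = 6, 4
A = np.random.rand(m, n)
s = np.linalg.svd(A, compute_uv=False)     # singular values of A
Q, _ = np.linalg.qr(np.random.rand(m, m))  # some matrix with Q' * Q = I

x = np.random.rand(m)
# (1) ||Q x|| = ||x|| for any vector x
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))
# (2) ||Q A|| = ||A|| for any m x n matrix A
print(np.isclose(np.linalg.norm(Q @ A), np.linalg.norm(A)))
# (3) ||A|| = ||S||, i.e. the norm of A equals the norm of its singular values
print(np.isclose(np.linalg.norm(A), np.linalg.norm(s)))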