Welcome to the Wiki page of the course Search Engines, WS 2009 / 2010. Lecturer: Hannah Bast. Tutorials: Marjan Celikik. Course web page: click here.
Here are PDFs of the slides of the lectures so far: Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6, Lecture 7, Lecture 8, Lecture 9, Lecture 10, Lecture 11, Lecture 12, Lecture 13.
Here are the recordings of the lectures so far (except Lecture 2, where we had problems with the microphone), LPD = Lecturnity recording: Recording Lecture 1 (LPD), Recording Lecture 3 (LPD), Recording Lecture 4 (LPD), Recording Lecture 5 (LPD without audio), Recording Lecture 6 (LPD), Recording Lecture 7 (AVI), Recording Lecture 8 (AVI), Recording Lecture 9 (AVI), Recording Lecture 10 (AVI), Recording Lecture 11 (AVI), Recording Lecture 12 (AVI), Recording Lecture 13 (AVI). To play the Lecturnity recordings (.lpd files) you need the Lecturnity Player, which you can download here. I have put up the Camtasia recordings as .avi files, which you can play with any ordinary video player; I would recommend VLC.
Here are PDFs of the exercise sheets so far: Exercise Sheet 1, Exercise Sheet 2, Exercise Sheet 3, Exercise Sheet 4, Exercise Sheet 5, Exercise Sheet 6, Exercise Sheet 7, Exercise Sheet 8, Exercise Sheet 9, Exercise Sheet 10, Exercise Sheet 11, Exercise Sheet 12, Exercise Sheet 13.
Here are your solutions and comments on the previous exercise sheets: Solutions and Comments 1, Solutions and Comments 2, Solutions and Comments 3, Solutions and Comments 4, Solutions and Comments 5, Solutions and Comments 6, Solutions and Comments 7, Solutions and Comments 8, Solutions and Comments 9, Solutions and Comments 10, Solutions and Comments 11, Solutions and Comments 12.
Here are our master solutions: Master solution for Mid-Term Exam, Master solution for Exercise Sheet 9, Master solution for Exercise Sheet 10.
Here are the rules for the exercises as explained in Lecture 2.
Here is everything about the mid-term exam. The final exam is on Friday March 12, 2010. The written exam begins at 2.00 pm in HS 026. The oral exams are scheduled on the same day.
Here is the table with the links to your uploaded solutions for Exercise Sheet 13. The deadline is Thursday 11Feb10 16:00.
Questions and comments about Exercise Sheet 13 below this line (most recent on top)
Hi Dragos. Regarding exercise 4 - I am not sure what exactly you mean; the proof should be as formal as any other proof. Regarding exercise 5 - you're right, and I see the same problem. One has to do the deletion in O(log n), but this is not provided by the standard priority queue implementations in C++, Java, etc. Ideally one would write one's own priority queue implementation, but that would be more work, so I am not sure myself (the professor is still at Google). Marjan 10Feb10 12:06
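A common workaround for the missing O(log n) delete, sketched below in Python (the class name and interface are my own illustration, not part of any master solution), is lazy deletion: instead of removing an entry from the heap, record which (key, priority) pairs are still valid and silently discard stale entries when they surface at the top.

```python
import heapq

class LazyPQ:
    """Max-priority queue with O(1) 'deletion' via lazy invalidation.
    Stale heap entries are skipped when they reach the top, so each
    entry is pushed and popped at most once: O(log n) amortized per op."""

    def __init__(self):
        self.heap = []    # min-heap of (-priority, key), i.e. a max-heap
        self.valid = {}   # key -> its currently valid priority

    def push(self, key, priority):
        self.valid[key] = priority            # overwriting also "updates"
        heapq.heappush(self.heap, (-priority, key))

    def delete(self, key):
        self.valid.pop(key, None)             # O(1): just forget the key

    def pop_max(self):
        while self.heap:
            neg, key = heapq.heappop(self.heap)
            if self.valid.get(key) == -neg:   # entry still current?
                del self.valid[key]
                return key, -neg
        return None                           # queue is empty
```

For example, after push('a', 0.9), push('b', 0.5) and delete('a'), pop_max() returns ('b', 0.5): the stale entry for 'a' is discarded on the way.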
My second question regards Exercise 4: should we adopt a formal way to prove the two statements (like induction), or is it sufficient to explain in words what happens at each iteration and why the final result will be the one mentioned in the exercise? Dragos 10th of February, 10.53
Hello. I was looking at the pseudo-code from the Information Retrieval Book (for the first exercise), and it says that the complexity of the algorithm is O(n^2 log n) if the insert and delete operations on the priority queue are done in O(log n), which makes sense. The problem is that they use a delete(element) operation, which, judging from the priority queue implementations in Java or Perl, has a complexity greater than O(log n) (commonly O(n)). I don't really see how to do it in O(log n) time. Dragos 9th of February, 23.33
@Marjan: The implementation of the lecture would work for n=10^4 (I have 2GB RAM) but it takes a very long time. The memory exception is thrown when I create the matrix C of the pseudo code from the information retrieval book, but since the priority queue contains the same number of elements as the matrix I think there is not much difference. Florian 8Feb 21:29
Another small observation: clearly, if n = 10^5, then you have n^2 = 10^10 entries in your SIM matrix. Say each entry has a size of 4 bytes (float); this makes 37.2GB in total! Obviously you can't go above n=10^4. Marjan 8Feb 21:09
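As a quick sanity check of that estimate (plain arithmetic; GiB here means 2^30 bytes):

```python
n = 10**5
entries = n * n            # 10^10 entries in the n x n similarity matrix
total_bytes = entries * 4  # 4-byte floats
gib = total_bytes / 2**30  # ~37.25 GiB, i.e. the ~37.2 GB figure above
```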
Hi Florian. I have two questions. First, how far in terms of n can you go with the implementation from the lecture? Second, if the difference is significant, could you give a short but precise statement of where most of the memory in your algorithm gets consumed? Is it the similarity matrix or the priority queue? Marjan 8Feb10 21:02
After implementing the algorithm from the information retrieval book for exercise 1, I already get an out-of-memory exception for n=10^4. Just having the values for n=10^1 ... 10^3 does not show much about whether T/n^2 is indeed logarithmic, but is it enough for the exercise? Florian 8Feb10 20:40
FYI: on Thursday, we will have 45 minutes of regular lecture (16.15 - 17.00 h), and then the intro lecture for the bachelor / master's project (17.00 - 17.45 h). Those who are interested in doing a project can just stay; the others may leave then. Hannah 8Feb10 14:08
Hi Florian + all: yes, it's possible for all four heuristics we have discussed in the lecture, including complete-link. Conceptually, you have to maintain the n x n similarity matrix SIM, where SIM[i][j] gives you the similarity of the current cluster i with the current cluster j. In each iteration, we consider only those entries SIM[i][j] where i and j are cluster representatives (that is, they are the smallest index of an element in their cluster) and where i != j (we never want to merge a cluster with itself). Initially each point is a cluster on its own and the SIM[i][j] are just the pairwise point similarities, as in the code we wrote together in the lecture. Now in each iteration, we need to find the pair (i,j) of cluster representatives for which SIM[i][j] is maximal. In the lecture we did it the trivial way, by a double loop. That requires quadratic time per iteration, giving cubic time overall. For the exercise, you should instead use one priority queue per cluster representative i to store all SIM[i][j] for all valid cluster representatives j. Then for each i you get the argmax_j SIM[i][j] in logarithmic time instead of in linear time, and doing this for each i you get the argmax_(i,j) SIM[i][j] in n * log n time instead of in n^2 time per iteration. Once you have found the pair of clusters to merge, you have to update both SIM[i][j] and the priority queues appropriately. The former can be done in linear time per iteration, and the latter can be done with a linear number of priority queue operations. BTW, you find pseudo-code of the algorithm in the information retrieval book: http://nlp.stanford.edu/IR-book/html/htmledition/time-complexity-of-hac-1.html; maybe that helps you, too. Hannah 7Feb10 20:49
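To make the bookkeeping concrete, here is a compact, runnable sketch of that scheme in Python (my own illustration, not the lecture's C++ code; the function and variable names are made up). It keeps one max-heap per cluster representative, and instead of deleting outdated heap entries it skips them lazily:

```python
import heapq

def hac_complete_link(sim):
    """Complete-link HAC on a symmetric n x n similarity matrix.
    One max-heap per cluster representative; stale entries are
    skipped lazily instead of being deleted from the heaps.
    Returns the merges as (representative i, merged j, similarity)."""
    n = len(sim)
    SIM = [row[:] for row in sim]       # working copy of the matrix
    active = set(range(n))              # current cluster representatives
    heaps = [[(-SIM[i][j], j) for j in range(n) if j != i] for i in range(n)]
    for h in heaps:
        heapq.heapify(h)
    merges = []
    for _ in range(n - 1):
        best = None                     # (similarity, i, j) of the best pair
        for i in active:
            h = heaps[i]
            # drop stale entries: partner no longer active, or similarity outdated
            while h and (h[0][1] not in active or -h[0][0] != SIM[i][h[0][1]]):
                heapq.heappop(h)
            if h and (best is None or -h[0][0] > best[0]):
                best = (-h[0][0], i, h[0][1])
        s, i, j = best
        i, j = min(i, j), max(i, j)     # smaller index represents the merge
        merges.append((i, j, s))
        active.remove(j)
        for k in active:
            if k != i:
                # complete-link: new similarity is the minimum of the two
                SIM[i][k] = SIM[k][i] = min(SIM[i][k], SIM[j][k])
                heapq.heappush(heaps[i], (-SIM[i][k], k))
                heapq.heappush(heaps[k], (-SIM[k][i], i))
    return merges
```

On the toy matrix [[1, .9, .1], [.9, 1, .2], [.1, .2, 1]] this first merges clusters 0 and 1 (similarity 0.9) and then merges 2 into them with complete-link similarity min(0.1, 0.2) = 0.1.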
Hi, in exercise 1, is it really possible to implement the complete-link heuristic with a priority queue so that finding the next two clusters to be merged only needs logarithmic time? If I did it right, then we have to fill the priority queue with the possible pairs of clusters and use their similarity under the complete-link heuristic to sort the queue. But when two clusters are merged, the distances of the combined cluster (i.e. the one with the smaller index) to the other clusters can change, so after each merge the priority queue has to be rebuilt to ensure its heap property. But a complete rebuild of the queue is quite time consuming, or am I doing something wrong? Florian 7Feb10 20:09
Hi Mirko, you are free to modify the existing program (linked to in my first comment below), or write the program from scratch (or transcribe my C++ program) in a programming language of your choice. Hannah 7Feb10 16:40
Hi, one question about Exercise 1: the text says "Modify the code"; does this mean we have to extend the C++ program, or are we also allowed to write the complete program in another language like Java? Mirko 7Feb10 16:33
I can't say now with certainty which courses exactly I will offer in a year. It is likely though, that we will offer bachelor / master projects on a regular basis. Hannah 5Feb10 21:17
Yes, Marjan is right. Johannes 2010-02-05:1931
Probably Johannes wants/has to do his project next year. Marjan 5Feb10 18:32
Hi Johannes, I don't quite understand your comment, what do you mean? Hannah 5Feb10 17:33
Even if it was in one year? Johannes 2010-02-05:1719
If you are interested in doing the Bachelor / Master's project (following this course as a block course), please send me an email asap: bast@informatik.uni-freiburg.de. Hannah 4Feb10 19:30
Here is the hierarchical clustering code which we have written in the lecture. Hannah 2Feb10 18:47