Welcome to the Wiki page of the course Search Engines, WS 2009 / 2010. Lecturer: Hannah Bast. Tutorials: Marjan Celikik. Course web page: click here.
Here are PDFs of the slides of the lectures: Lecture 1 (Intro), Lecture 2 (socket communication), Lecture 3 (ranking), Lecture 4 (IO-efficiency & compression), Lecture 5 (list intersection), Lecture 6 (prefix search), Lecture 7 (javascript), Lecture 8 (error-tolerant search), Lecture 9 (programming languages & UTF-8), Lecture 10 (latent semantic indexing), Lecture 11 (naive bayes classification), Lecture 12 (clustering), Lecture 13 (hierarchical clustering), Lecture 14 (hypothesis testing), Projects.
Here are the recordings of the lectures (except Lecture 2, where we had problems with the microphone), LPD = Lecturnity recording: Recording Lecture 1 (LPD), Recording Lecture 3 (LPD), Recording Lecture 4 (LPD), Recording Lecture 5 (LPD without audio), Recording Lecture 6 (LPD), Recording Lecture 7 (AVI), Recording Lecture 8 (AVI), Recording Lecture 9 (AVI), Recording Lecture 10 (AVI), Recording Lecture 11 (AVI), Recording Lecture 12 (AVI), Recording Lecture 13 (AVI), Recording Lecture 14 (AVI). To play the Lecturnity recordings (.lpd files) you need the Lecturnity Player, which you can download here. I put the Camtasia recordings as .avi files, which you can play with any ordinary video player; I would recommend VLC.
Here are PDFs of the exercise sheets so far: Exercise Sheet 1, Exercise Sheet 2, Exercise Sheet 3, Exercise Sheet 4, Exercise Sheet 5, Exercise Sheet 6, Exercise Sheet 7, Exercise Sheet 8, Exercise Sheet 9, Exercise Sheet 10, Exercise Sheet 11, Exercise Sheet 12, Exercise Sheet 13, Exercise Sheet 14.
Here are your solutions and comments on the previous exercise sheets: Solutions and Comments 1, Solutions and Comments 2, Solutions and Comments 3, Solutions and Comments 4, Solutions and Comments 5, Solutions and Comments 6, Solutions and Comments 7, Solutions and Comments 8, Solutions and Comments 9, Solutions and Comments 10, Solutions and Comments 11, Solutions and Comments 12, Solutions and Comments 13.
Here are our master solutions: Master solution for Exercise Sheet 6 (only Exercise 4), Master solution for Mid-Term Exam, Master solution for Exercise Sheet 9, Master solution for Exercise Sheet 10, Master solution for Exercise Sheet 11, Master solution for Exercise Sheet 12.
Here are the rules for the exercises as explained in Lecture 2.
Here is everything about the mid-term exam (December 18, 2009, a pure trial exam which did not count for anything) and the final exam (March 12, 2010, the real thing which accounted for most of the mark).
Here is the table with the links to your uploaded solutions for Exercise Sheet 14. The deadline is Thursday 18Feb10 16:00.
More general questions and comments
Yes, it's on Wednesday, March 31, 2 - 3 pm in my office (building 51, second floor, room 28). Hannah 20Mar10 21:09
Hi, have you decided when and where the exam review will be, yet? Marius Mar30th2010 08:50 p.m.
Your marks for the final exam and your overall mark for the course are now available on your personal page. You will have the opportunity to look at your exams on a certain day, which we haven't fixed yet. We will tell you (via this page) when we have fixed that date. Hannah 15Mar10 00:39
Yes, HS026 in building 101. Hannah 12Mar10 8:45am
Does the HS mean the same building 101? I am still new in Freiburg ;). Paresh 12 Mar 10 07:45
I am sorry that we have not managed to produce the master solutions for exercise sheets 13 and 14 yet. However, Lecture 13 is still relevant for the exam (clustering, part 2), but I can tell you that there will be no task about Lecture 14 (statistical hypothesis testing). I think that's fair, because there was no tutorial for the (last) exercise sheet 14. Exercise sheet 14 will count as a normal exercise sheet, however. Hannah 11Mar10 15:21
Hi, since there are still no master solutions for sheets 13-14, I assume the contents of the lecture concerning these sheets are not relevant for the exam. Marius Mar11th 2:23 p.m.
To Johannes + all: you are not allowed to bring any computing devices whatsoever and you won't need them. If there is a task which requires a calculation that is unreasonable to do by hand (like the log_2(10/7) from the mid-term exam), we will tell you what it is or an approximation to work with (for example that you can take log_2(10/7) as 0.5). Hannah 10Mar10 20:42
Exam and portable calculators: "2. You are not allowed to use any computing devices, mobile phones, etc." I had some problems with pen, paper and sqrt(1080). May we bring calculators? Johannes 2010-03-10T20:27
Thanks, the solution for sheet 6, exercise 4 helped us a lot! björn
@Jonas: thanks for the comment, I have corrected it in the master solution. @Björn: I added a master solution for Exercise Sheet 6 (only Exercise 4), linked above, with what I think is a very short and simple proof. Tell me if you find anything wrong with it. Hannah 10Mar10 16:40
Jonas: Yes, that was already mentioned in the tutorials. Marjan 10Mar10 15:58
Hi. Concerning exercise sheet 10, exercise 1: shouldn't you take the square roots of 108 and 10 (in the matrix EPSILON)? Otherwise the equation is not right. Jonas 10.03.10
Hi, we got a question concerning exercise sheet 6, exercise 4. In the tutorial Marjan presented a solid but complicated solution using Taylor expansion. In the lecture you mentioned that this wasn't necessary for any exercise. Unfortunately we fail at finding a simpler but still mathematically rigorous solution. Would you please give a brief idea of how to prove such inequalities, as this might be useful for similar, yet easier exercises in the exam. Björn Mi 15:12
Hi Johannes + all. Here is a very simple example: let the query word be algorithm and one candidate similar word computed by the permuted lexicon be algXXXthm (the common prefix is thmalg [from the permutations thmalgori and thmalgXXX] which is long enough) and let the edit distance threshold be 2. Obviously this candidate word will be filtered out because the edit distance is 3. Marjan 07Mar10 18:57
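For illustration, here is a minimal sketch of that post-filtering step (my own code with made-up names, not the lecture's reference implementation): compute the actual edit distance by the standard dynamic program and keep only candidates within the threshold. On Marjan's example it drops algXXXthm, because its edit distance to algorithm is 3 > 2.

{{{
import java.util.ArrayList;
import java.util.List;

public class PermutermPostFilter {

  // Standard dynamic-programming Levenshtein edit distance.
  static int editDistance(String a, String b) {
    int[][] dp = new int[a.length() + 1][b.length() + 1];
    for (int i = 0; i <= a.length(); i++) dp[i][0] = i;
    for (int j = 0; j <= b.length(); j++) dp[0][j] = j;
    for (int i = 1; i <= a.length(); i++) {
      for (int j = 1; j <= b.length(); j++) {
        int sub = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
        dp[i][j] = Math.min(dp[i - 1][j - 1] + sub,
                   Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1));
      }
    }
    return dp[a.length()][b.length()];
  }

  // Keep only candidates whose actual edit distance is within the threshold d.
  static List<String> postFilter(String query, List<String> candidates, int d) {
    List<String> result = new ArrayList<>();
    for (String c : candidates) {
      if (editDistance(query, c) <= d) result.add(c);
    }
    return result;
  }

  public static void main(String[] args) {
    // "algXXXthm" survives the permuterm prefix filter, but ED = 3 > 2.
    System.out.println(editDistance("algorithm", "algXXXthm")); // prints 3
  }
}
}}}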
Filtering with a Permuterm Index: The slide states: "for all matches thus found, compute the actual edit distance". Is there a simple strawman example for a word that gets removed in the post-filtering step? (Today is silly question day.) Johannes 2010-03-07T18:26
Hi Johannes + all. Concerning your inverted index question: it really depends on the application. If you have lists of only doc ids and want to intersect them fast, you would sort the lists by doc id; if you want to do top-k, you would sort them by score. Duplicates only make sense when you also store positional information, which we didn't do in the lecture. Concerning your Elias-Gamma question: there is an upper bound, which I think we also derived in the lecture, and that is log n + O(log^(k) n) + O(1), but I couldn't tell you what are the constants hidden in the two Big-Ohs. Hannah 7Mar10 18:19
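To make the doc-id case concrete, here is a minimal sketch (my own illustration) of the linear-time "zipper" intersection of two sorted, duplicate-free doc-id lists:

{{{
import java.util.ArrayList;
import java.util.List;

public class ListIntersection {
  // Linear-time intersection of two sorted, duplicate-free doc-id lists.
  static List<Integer> intersect(int[] a, int[] b) {
    List<Integer> result = new ArrayList<>();
    int i = 0, j = 0;
    while (i < a.length && j < b.length) {
      if (a[i] == b[j]) { result.add(a[i]); i++; j++; }
      else if (a[i] < b[j]) i++;   // advance the list with the smaller id
      else j++;
    }
    return result;
  }

  public static void main(String[] args) {
    int[] list1 = {1, 3, 5, 8, 13};
    int[] list2 = {3, 4, 8, 21};
    System.out.println(intersect(list1, list2)); // [3, 8]
  }
}
}}}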
Inverted indexes and like: If a inverted index maps a word, w, (perhaps a string) to a subset, W(w), of the set of all documents (perhaps only the IDs as numbers). Is W(w) always sorted? Does it contain duplicates? For some application (and the algorithms for them) this seems to matter. I'm just asking in case of a exam task, involving coding (especially k-way-merge). Johannes 2010-03-07T13:54
Elias-Gamma Encoding: Is there a closed form for the length of the code for an integer x when Elias is iterated k times? Johannes 2010-03-07T15:14
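For reference, the lengths are easy to compute recursively. A small sketch (my own, assuming one common definition of the iteration: level 0 is plain unary, and level k encodes the payload bits of x plus a length field written with the level k-1 code); for k = 1 it reproduces the familiar Elias-Gamma length 2 * floor(log2 x) + 1, and for k = 2 the Elias-Delta length:

{{{
public class EliasLength {
  // floor(log2 x) for x >= 1.
  static int log2(int x) { return 31 - Integer.numberOfLeadingZeros(x); }

  // Length in bits of the k-times iterated Elias code for x >= 1, under the
  // assumed definition: k = 0 is unary (x bits), and level k writes the
  // floor(log2 x) payload bits plus the length field in the level-(k-1) code.
  static int codeLength(int x, int k) {
    if (k == 0) return x;                  // unary: x bits
    return log2(x) + codeLength(log2(x) + 1, k - 1);
  }

  public static void main(String[] args) {
    // k = 1 is Elias-Gamma: 2 * floor(log2 x) + 1 bits.
    System.out.println(codeLength(13, 1)); // 7 = 2*3 + 1
    // k = 2 is Elias-Delta: floor(log2 x) + 2*floor(log2(floor(log2 x)+1)) + 1.
    System.out.println(codeLength(13, 2)); // 8 = 3 + 2*2 + 1
  }
}
}}}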
Questions and comments about the master solution of the mid-term exam
Johannes 2010-03-07T12:40 :
1.3: CLAIM: If an encoding is prefix-free, then there is no code that is a prefix of a different code. Does this claim hold? If so, then 001 mustn't be a code, since 0 is a code and a prefix of 001. Is this right?
There was an obvious mistake which I now corrected (00 should be mapped to 1, not 0). Hannah 7Mar10 12:56
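The claim in 1.3 is exactly the definition of prefix-freeness, and it is easy to check mechanically. A tiny sketch (my own, quadratic for simplicity):

{{{
public class PrefixFreeCheck {
  // Returns true iff no code in the set is a prefix of a different code.
  static boolean isPrefixFree(String[] codes) {
    for (int i = 0; i < codes.length; i++) {
      for (int j = 0; j < codes.length; j++) {
        if (i != j && codes[j].startsWith(codes[i])) return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // 0 is a prefix of 001, so {0, 001} is not prefix-free:
    System.out.println(isPrefixFree(new String[]{"0", "001"}));      // false
    System.out.println(isPrefixFree(new String[]{"1", "00", "01"})); // true
  }
}
}}}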
1.4: It states: "For a sequence of length n, we need to generate n/2 such codes [...]." Doesn't each of the n symbols in the sequence get encoded?
Each code stands for two bits at a time, so for a sequence of n bits, you have to generate n/2 codes. I replaced sequence of length n by sequence of n bits to make this clearer. Hannah 7Mar10 12:58
3.4: The function returns the number of common k-grams (as far as I see). Can the return-line be completed with a call to the function from 3.2 to return the Jaccard-distance?
Yes, indeed, I replaced return l by return jaccardDistance(x, y, k, l). Hannah 7Mar10 13:01
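For reference, here is one way such a jaccardDistance function could look (my own sketch with hypothetical details; it assumes all k-grams within a string are distinct, so a string of length m has m - k + 1 k-grams, and the union size follows by inclusion-exclusion):

{{{
public class JaccardKGrams {
  // Jaccard distance of the k-gram sets of x and y, given the number l of
  // common k-grams; assumes the k-grams within each string are distinct.
  static double jaccardDistance(String x, String y, int k, int l) {
    int nx = x.length() - k + 1;    // number of k-grams of x
    int ny = y.length() - k + 1;    // number of k-grams of y
    double union = nx + ny - l;     // inclusion-exclusion
    return 1.0 - l / union;         // 1 - |intersection| / |union|
  }

  public static void main(String[] args) {
    // "hello" and "hallo" share the 2-grams "ll" and "lo": l = 2, union = 6.
    System.out.println(jaccardDistance("hello", "hallo", 2, 2)); // 0.666...
  }
}
}}}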
5.4: Does the top-k algorithm return the top k documents? If so, which k had to be used in this task? What exactly is the condition for stopping? What exactly is the update rule for the ranges? My idea is that (for a fixed document) the minimum is always the known minimum from any of the lists, and the maximum is always the (already known) minimum plus the lowest score seen in any list other than the one the minimum is from. In case of only two lists there may be some simplifications.
The task asked for the top-ranked document, so k = 1. We can stop when the upper bound for all documents not yet seen is strictly below the k-th largest lower bound so far, and when the score ranges for the documents already seen are such that it is clear which are the top-k documents and in which order. If there are ties, and we don't care how they are broken, and we don't care to know the order of the top-k documents, we can sometimes stop earlier. Does this answer all your questions? Hannah 7Mar10 13:06
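To illustrate the stopping condition for k = 1, here is a small sketch (my own, not the master solution; ties aside): given lower/upper score bounds for the documents seen so far and an upper bound for all unseen documents, we may stop as soon as some document's lower bound is strictly above every other upper bound.

{{{
import java.util.HashMap;
import java.util.Map;

public class TopOneStopCheck {
  // bounds maps docId -> {lowerBound, upperBound} for documents seen so far;
  // upperUnseen bounds the score of any document not yet seen. For k = 1
  // (ties aside) we can stop once some document's lower bound is strictly
  // above the upper bound of every other seen document and of all unseen ones.
  static boolean canStop(Map<Integer, double[]> bounds, double upperUnseen) {
    for (Map.Entry<Integer, double[]> d : bounds.entrySet()) {
      double lower = d.getValue()[0];
      if (lower <= upperUnseen) continue;       // unseen docs could still win
      boolean beatsAll = true;
      for (Map.Entry<Integer, double[]> other : bounds.entrySet()) {
        if (!other.getKey().equals(d.getKey()) && other.getValue()[1] >= lower) {
          beatsAll = false;                     // some seen doc could still win
          break;
        }
      }
      if (beatsAll) return true;
    }
    return false;
  }

  public static void main(String[] args) {
    Map<Integer, double[]> bounds = new HashMap<>();
    bounds.put(17, new double[]{0.9, 1.2});     // doc 17: score in [0.9, 1.2]
    bounds.put(42, new double[]{0.2, 0.8});     // doc 42: score in [0.2, 0.8]
    System.out.println(canStop(bounds, 0.7));   // true: 0.9 > 0.8 and 0.9 > 0.7
  }
}
}}}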
Thanks a lot for your comments! Please go on if you have more. Hannah 7Mar10 13:07
Thanks a lot for your answers! Johannes 2010-03-07T13:44
Questions and comments about Exercise Sheet 14 below this line (most recent on top)
Hi Johannes: why don't you start with the first few questions, and then let's see whether it makes sense to continue this via the Wiki, or via private email, or via a meeting in person. Hannah 6Mar10 17:36
Yes, the final exam is like the mid-term exam in this respect. Hannah 6Mar10 17:36
Alex: http://vulcano.informatik.uni-freiburg.de/wiki/teaching/SearchEnginesWS0910/MidTermExam, so it seems to be allowed. Mirko, 6Mar10 16:10
Hi, I was wondering, will the exam next week also be an open book exam like the mid-term? Perhaps I overlooked it, but I don't think this is stated anywhere yet. Alex 6Mar10 13:49
I have lots of questions and don't know where to put them. I suppose this wiki-page will get chaotic pretty fast if I post 20 questions. Johannes VI Mar MMX 12:00
I'm sorry for the delay with the master solutions. I am at a conference right now but will try to make progress with this over the weekend. Hannah 4Mar10 23:59
Do we get master solutions for ex. 11, 12, 13 and 14? Johannes 04Mar2010 23:32 ZULU
Now they're there again. Marjan 01Mar18:09
ARGH! I'm very sorry. My Down-Them-All Plugin for Firefox seems to have deleted all the lecture PDFs! Sorry for that. Rollback to previous versions does not seem to work. I hope, someone has already downloaded them all and is able to restore them! SORRY! Interesting, I've got the rights to delete something from the main page, though. Marius Mar 1st 2010 2:38 p.m.
(Reminder:) Hello, the master solutions are not online, yet. alex n 1Mar10 11:08
Yes, we are working on it. Please remind us again if they aren't online by the end of this week. Hannah 23Feb10 14:30
Do we get master solutions for ex. 11, 12, 13 and 14? Johannes 23Feb10 14:05
Hi Matthias, yes, Pr(A) = 1 - Pr(not A), for any event A, and so for any random variable X, Pr(X <= x) = 1 - Pr(X > x), because X <= x and X > x are complementary events. For continuous random variables (like variables with a normal distribution), the difference between <= and < and >= and > is immaterial, because Pr(X = x) = 0 for each fixed x. But anyway, to compute the probability, you first have to transform it a bit, like I did in the lecture, and then obtain Pr(N(0,1) >= sqrt(n1) * (µ1 - µ) / σ) and Pr(N(0,1) <= sqrt(n2) * (µ - µ2) / σ). To evaluate the latter you can also simply use the symmetry of the normal distribution, due to which one has Pr(N(0,1) <= -x) = Pr(N(0,1) >= x). Hannah 18Feb10 12:58
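If you want to check such values numerically: Java's standard library has no normal CDF, but Pr(N(0,1) >= x) = (1 - erf(x / sqrt(2))) / 2, and erf is easy to approximate. A small sketch (my own; the Abramowitz-Stegun formula below is accurate to about 1.5e-7, which is plenty here):

{{{
public class NormalTail {
  // Abramowitz-Stegun 7.1.26 approximation of erf, absolute error < 1.5e-7.
  static double erf(double z) {
    double sign = z < 0 ? -1 : 1;
    z = Math.abs(z);
    double t = 1.0 / (1.0 + 0.3275911 * z);
    double poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
                + t * (-1.453152027 + t * 1.061405429))));
    return sign * (1.0 - poly * Math.exp(-z * z));
  }

  // Pr(N(0,1) >= x); by symmetry this also equals Pr(N(0,1) <= -x).
  static double upperTail(double x) {
    return 0.5 * (1.0 - erf(x / Math.sqrt(2.0)));
  }

  public static void main(String[] args) {
    // E.g. for x = sqrt(n1) * (µ1 - µ) / σ from the post above:
    System.out.println(upperTail(1.96)); // approx 0.025
    System.out.println(upperTail(0.0));  // 0.5
  }
}
}}}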
Hi, how can we compute Pr(N(n2 * µ, n2 * σ^2) <= n2 * µ2)? Can we use 1 - Pr(N(n2 * µ, n2 * σ^2) >= n2 * µ2) for that? Matthias 18Feb10 12:01
Hi Florian + all, one of µ1 and µ2 is larger than µ and one is smaller. Let's assume µ1 is larger and µ2 is smaller. Then for µ1 you have to look at Pr(N(n1 * µ, n1 * σ^2) >= n1 * µ1). But for µ2 you have to look at Pr(N(n2 * µ, n2 * σ^2) <= n2 * µ2). Note the <= instead of the >= for the second probability. Recall the meaning of these probabilities. Just as an example, let µ be 100 and µ1 be 150 and µ2 be 50. Then the first probability means: what is the probability that I see a mean of 150 or more in my first sample, although the mean of my distribution is 100. The second probability means: what is the probability that I see a mean of 50 or less in my second sample, although the mean of my distribution is 100. If you take both <= or both >= for both probabilities, it is to be expected that you get two completely different probabilities, one very low and one very high (except when they are both close to 50%). Please ask again if this is still unclear. Hannah 17Feb10 21:51
Sorry, with probability for µ1 I meant Pr(N(n1 * µ, n1 * σ^2) >= n1 * µ1) and accordingly with probability for µ2 I meant Pr(N(n2 * µ, n2 * σ^2) >= n2 * µ2), where n1 = n2 for the exercise sheet. Florian 17Feb10 21:18
Hi Florian, what exactly do you mean by probability for µ1 and probability for µ2? Hannah 17Feb10 21:02
Hi, what values are we expected to get for exercise 4? I always get a probability of about 99.9% for μ1 and a value of about 0.07% for μ2, can that be? Florian 17Feb10 18:25
Hi Florian, yes, the averages in Exercise 3 should be average running times. I uploaded a new version of the sheet, where I corrected this. Hannah 14Feb10 17:48
Hi, I guess we should measure the running times to determine the efficiency of the programs for exercise 3? Florian 15Feb10 17:42
Hi Claudius, you should compute Pr(D|H0), exactly as done in the lecture for Example 2, where we computed this probability as Pr(X > x), where X is a random variable with distribution N(0,1), that is, normal with mean 0 and variance 1, and x depends on the mean and variance of your data. Hannah 14Feb10 16:44
Hi. If I have understood correctly, we have to compute Pr(H|D) in Exercise 4. From statistical hypothesis testing, we get Pr(D|H). Now, Pr(H|D) = Pr(D|H) * (Pr(H) / Pr(D)). We know Pr(D|H) and we can compute Pr(D), but what value do we have to use for Pr(H)? Claudius 14Feb10 14:41
Hi Eric, I don't care whether you use integers or doubles, but I am curious why the one should be any harder than the other? Hannah 12Feb10 19:02
May we use integers for sorting? Or do we have to use doubles? This is important for generating my sorted array. Eric 12Feb10 18:56
If you're asking about the merging you can of course use a priority queue if you want, but you don't really need it when merging 2 lists. Marjan 18:28
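For completeness, a minimal sketch (my own) of merging two sorted arrays without any priority queue; a heap only starts to pay off for k-way merging with larger k:

{{{
public class TwoWayMerge {
  // Merge two sorted arrays into one sorted array in linear time.
  static int[] merge(int[] a, int[] b) {
    int[] result = new int[a.length + b.length];
    int i = 0, j = 0, r = 0;
    while (i < a.length && j < b.length) {
      result[r++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    }
    while (i < a.length) result[r++] = a[i++];   // copy the remainder
    while (j < b.length) result[r++] = b[j++];
    return result;
  }

  public static void main(String[] args) {
    int[] merged = merge(new int[]{1, 4, 9}, new int[]{2, 3, 10});
    System.out.println(java.util.Arrays.toString(merged)); // [1, 2, 3, 4, 9, 10]
  }
}
}}}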
Why would you use a priority queue? It's simple sorting, the exercise is not about implementing your own sorting algorithm or something like that. About exercise 3, it should be clear from the exercise itself that the sequences should be sorted (otherwise how can the merging work?) Marjan 18:23
Does that mean we have nothing to do other than use a priority queue or something like that, and don't have to implement the sorting? And in Exercise 3, should the random set be an ordered one or not? Alex 12Feb10 18:19
We prefer randomized sorting using bitonic networks, alternatively combined with LSD radix sort or simple pancake sort. That's of course a joke, it should be clear that you can use the built-in sorting functions (your own implementation will be certainly slower). Marjan 12Feb10 18:12
What does "do a standard sort" in exercise 2 mean? Shall I implement one on my own, or may I use the Java built-in sorting mechanisms? Also, which sorting algorithm do you prefer for this? Eric 12Feb10 18:04