
Welcome to the Wiki page of the course Search Engines, WS 2009 / 2010. Lecturer: Hannah Bast. Tutorials: Marjan Celikik. Course web page: click here.

Here are PDFs of the slides of the lectures so far: Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6, Lecture 7, Lecture 8, Lecture 9, Lecture 10, Lecture 11, Lecture 12.

Here are the recordings of the lectures so far (except Lecture 2, where we had problems with the microphone), LPD = Lecturnity recording: Recording Lecture 1 (LPD), Recording Lecture 3 (LPD), Recording Lecture 4 (LPD), Recording Lecture 5 (LPD without audio), Recording Lecture 6 (LPD), Recording Lecture 7 (AVI), Recording Lecture 8 (AVI), Recording Lecture 9 (AVI), Recording Lecture 10 (AVI), Recording Lecture 11 (AVI), Recording Lecture 12 (AVI).

Here are PDFs of the exercise sheets so far: Exercise Sheet 1, Exercise Sheet 2, Exercise Sheet 3, Exercise Sheet 4, Exercise Sheet 5, Exercise Sheet 6, Exercise Sheet 7, Exercise Sheet 8, Exercise Sheet 9, Exercise Sheet 10, Exercise Sheet 11, Exercise Sheet 12.

Here are your solutions and comments on the previous exercise sheets: Solutions and Comments 1, Solutions and Comments 2, Solutions and Comments 3, Solutions and Comments 4, Solutions and Comments 5, Solutions and Comments 6, Solutions and Comments 7, Solutions and Comments 8, Solutions and Comments 9, Solutions and Comments 10.

Here are our master solutions: Master solution for Mid-Term Exam, Master solution for Exercise Sheet 9, Master solution for Exercise Sheet 10.

The recordings of all lectures are now available, see above. Lecture 2 is missing because we had technical problems there. To play the Lecturnity recordings (.lpd files) you need the Lecturnity Player, which you can download here. The Camtasia recordings are provided as .avi files, which you can play with any ordinary video player; I would recommend VLC.

Here are the rules for the exercises as explained in Lecture 2.

Here is everything about the mid-term exam.

Here is the file for Exercise Sheet 11. It's a text file, where each line contains the name of the conference (in capital letters), followed by a TAB (ASCII code 9), followed by the title. There are three different conferences: STOC (2423 titles), SIGIR (2372 titles), and SIGGRAPH (1835 titles). The total number of titles / lines is 6630. The exact file size is 454365 bytes.
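For illustration, here is a minimal Python sketch for reading the file; the local file name (dblp.txt, as it is called further down) and the encoding are just assumptions, adjust them to your setup.

    # Minimal sketch for reading the file (assumption: saved locally as "dblp.txt").
    # Each line is: CONFERENCE<TAB>title
    from collections import Counter

    def read_records(filename="dblp.txt"):
        records = []
        with open(filename, encoding="latin-1") as f:
            for line in f:
                label, title = line.rstrip("\n").split("\t", 1)
                records.append((label, title))
        return records

    records = read_records()
    print(len(records))                            # should print 6630
    print(Counter(label for label, _ in records))  # counts per conference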

Here is the table with the links to your uploaded solutions for Exercise Sheet 11. The deadline is Thursday 28Jan10 16:00.

Questions and comments about Exercise Sheet 12 below this line (most recent on top)

Hi Eric + all: Taking the average is fine. Marjan 3Feb10 18:15

I am not sure about this, but precision and recall are (unlike the RSS) values that exist for each cluster, that is, every cluster has its own precision and recall. How shall we compute the classification's precision and recall? By taking the average? Or did I miss something? Eric 3Feb10 16:50

Hi Mirko, good question. The approach you suggest looks fine to me. An alternative, very simple, yet also reasonable approach would be to simply assign the text to a random centroid in this case. Hannah 2Feb10 20:42

Hi Johannes, I wouldn't call it a problem, since you have solved the exercise and made an interesting observation. Uncharacteristic clusters are indeed something that can happen. But it's of course an interesting question, and I guess the one you are asking: how can one get a better clustering, and is that possible with k-means at all? Well, I don't know myself, since I haven't implemented it for this data set. But one thing I would certainly try in this situation is to start with centroids which are just the "average" (in the sense defined on the exercise sheet) of all titles from that class. This is of course not unsupervised anymore, but it would be interesting to see whether at least with such a strong hint k-means is able to find a good clustering. If yes, then what you observed would just be an unfavorable local maximum. If you try the choice of centroids I just explained, please report on the results, I am curious. Hannah 2Feb10 20:40

To which cluster should we assign the texts which have JaccardDistance=0 for every centroid? In my implementation they just get assigned to the first centroid, but the thing is that, according to the JaccardDistance, they belong to none of the centroids. This 'manipulates' the clusters. My idea is to just ignore these texts until there is a centroid with JaccardDistance > 0. Is this okay? Mirko 2Feb10, 20:35

I have the problem that the clusters become too average in the first rounds, and no meaningful clustering happens from that point on. In the first round, the distance is 1.0 or 0.99 for almost every title-centroid pair. This leads to very uncharacteristic centroids. Johannes 2Feb10 20:30

Oh yes, what Marjan wrote is actually very important, and I didn't have it in mind when I wrote the exercise sheet. K-means won't do anything particularly meaningful if in each assignment step you assign each point / record to the *farthest* cluster. And it makes sense that it alternates between two clusterings at some point: assuming that the centroids don't change anymore, once you assign a point to its *farthest* centroid, the centroid to which it was assigned before will be its farthest centroid in the next step. Cool stuff. Hannah 2Feb10 20:19

To Hannah: I tried 10000 for M and the RSS value is still sometimes increasing and sometimes decreasing. I'll change my program according to Marjan's observation now and report again... EDIT: Ok, after changing this everything seems to make sense again; the RSS value is now constantly increasing until it does not change anymore! Florian 02Feb10 20:10

To all: Just a small but important observation. The "Jaccard distance" from the exercise sheet is in fact a similarity (between two texts), meaning that your RSS should in fact grow. This also means that you should pick, for each text, the cluster centroid with the maximum value of the Jaccard similarity. Marjan 02Feb10 19:46
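For illustration, a minimal sketch of this word-set Jaccard similarity and of the assignment step with the maximum (not minimum) value; treating a title as its set of distinct, lower-cased, whitespace-separated words is just one possible convention, not a prescription.

    # Sketch: Jaccard similarity of two texts over their sets of distinct words.
    def jaccard_similarity(text_a, text_b):
        a, b = set(text_a.lower().split()), set(text_b.lower().split())
        if not (a | b):
            return 0.0
        return len(a & b) / len(a | b)

    # Assign a title to the centroid with the *maximum* similarity:
    def best_centroid(title, centroids):
        return max(range(len(centroids)),
                   key=lambda i: jaccard_similarity(title, centroids[i]))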

Sure, this is a meaningful way to pick the initial centroids, too. However, there is no guarantee at all that most SIGIR papers will be assigned to the cluster for which the initial centroid was a SIGIR paper, simply because the title of that paper might have been very untypical for SIGIR. So you might end up with very low precision and recall, although for another assignment of the clusters to the labels / classes, precision and recall might actually be very good. Hannah 2Feb10 19:32

Hi, is it also allowed to choose the initial centroids as follows: first centroid = random SIGGRAPH text, second centroid = random SIGIR text, third centroid = random STOC text? Then we would also know which centroid belongs to which class. Mirko 2Feb 19:27

Concerning the RSS values: I haven't written the program myself, but Marjan promised to do so soon. But here is some intuition. First, I would also output the RSS value divided by the number of titles (6630), and the square root of that. Let's call this number J. This gives you the average Jaccard distance of a text to its centroid. What is this J on average? Well, the number of words in your centroids quickly becomes M. In the perfect case, all the words of each text are contained in its nearest centroid. Then the Jaccard distance is text length / M, and so J should be average text length / M. Hannah 2Feb10 19:13
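One way to read the previous paragraph in formulas, with N = 6630 titles and M the centroid size; the exact reading of J (whether to take the square root or not) depends on how exactly you define your RSS, so this is only my interpretation of the intuition above:

    \[
      J \;=\; \sqrt{\frac{\mathrm{RSS}}{N}}
      \;\approx\; \text{average Jaccard value of a title w.r.t.\ its centroid},
    \]
    \[
      \text{and in the ideal case (all words of a title contained in its centroid):}\quad
      J \;\approx\; \frac{\text{average title length (in distinct words)}}{M}.
    \]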

About computing precision and recall: yes, you need an assignment of the clusters to the three classes SIGGRAPH, SIGIR, STOC. I would do that assignment as follows. Pick the first cluster in your numbering of clusters. Find the most frequent label in the cluster (you have the labels of all titles from dblp.txt). Assign the cluster to that label. Then pick the second cluster, and find the most frequent of the remaining labels, two in this case. And so on, where for three clusters there is no "and so on", because you have only one choice of label left for the third cluster. Once you have a 1-1 assignment from clusters to labels, it should be clear how to compute precision and recall (same definitions as in the case of classification). Hannah 2Feb10 18:57
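A small sketch of this greedy cluster-to-label assignment and of per-class precision and recall; the data layout (one list of ground-truth labels per cluster) and all names are only illustrative.

    # Sketch of the greedy assignment described above (assumption: `clusters` is a
    # list of lists with the ground-truth labels of the titles in each cluster, and
    # each cluster contains at least one title with a still-unassigned label).
    from collections import Counter

    def assign_labels(clusters, labels=("SIGGRAPH", "SIGIR", "STOC")):
        remaining = set(labels)
        assignment = {}
        for i, members in enumerate(clusters):
            counts = Counter(l for l in members if l in remaining)
            label = counts.most_common(1)[0][0]   # most frequent remaining label
            assignment[i] = label
            remaining.remove(label)
        return assignment

    def precision_recall(clusters, assignment, total_per_label):
        # precision: fraction of a cluster's titles that carry the assigned label;
        # recall: fraction of all titles with that label that ended up in the cluster.
        result = {}
        for i, members in enumerate(clusters):
            label = assignment[i]
            hits = sum(1 for l in members if l == label)
            result[label] = (hits / len(members), hits / total_per_label[label])
        return result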

I had negative steps (as asked below) until I found that I had an empty cluster. Fixing that removed the problem. Johannes 2763-02-02T1834

To Exercise 3: Can you give an intuition about the range the RSS value should be in (approximately)? My RSS values seem very high to me... Additionally, I have the same question as Matthias a few lines further down: how can we calculate recall and precision without knowing which cluster belongs to which conference? Claudius 2Feb10 17:23

Thanks for the answer, Matthias, please also try with M = 10000 (if that is feasible time-wise, otherwise take M a little smaller) and report again, thanks. Hannah 2Feb10 16:43

To Hannah: Yes, for me it does happen for large M as well. I tried 10, 100, and 1000 with the same qualitative effects. However, for larger M, the RSS value decreases in size. The RSS values are rising and falling within a very small range of about 5%. Do we have an error here, or is the given distance function + recalculation step simply not convergent? Matthias

To Matthias + Florian + all: Which values of M have you tried? Does it still happen when you choose M large enough, say 1000, or even larger? Note that the cluster centroids cannot be meaningful if M is not large enough. Hannah 2Feb10 16:21

I'm noticing exactly the same behavior as Florian. After a few iterations, the RSS value alternates, especially when using a small M. Also, the RSS values do not fall; they are more or less jumping around the same base value all the time.

Also, in order to calculate Recall + Precision, we have to define which of the clusters is which conference. How should we do that? Just by counting the most frequent conference in a cluster based on the known ground truth information? Matthias

To Alex: You should compute the k-means clustering on the whole set, that's the point of clustering anyway (no train sets). Marjan 02Feb10 14:39

Should we compute k-means on the complete set, or should we divide the set into a training and a test set, like in the exercise before? Alex 02Feb10 14:10

Hi Florian + all. That's a very valid question, although the behavior is counterintuitive to what we expected. To give a precise answer I'll have to write the program myself, but since I am in the middle of a deadline I can only promise that I will come back to it asap. My advice is to try to understand why the RSS value increases - either by debugging your own program or by working through a toy example by hand - and then to report here again. Marjan 01Feb10 22:41

The RSS values of my clustering often get higher after an iteration step, is that normal for this kind of data? Furthermore my RSS value never stops changing, it always switches between two different values after a few iterations (I now just choose three random titles as the initial centroids). I also do not understand how I can compute the precision and recall values of the clustering with respect to the "perfect" clustering. Florian 01Feb10 22:41

To Florian, Eric + all: yes, sure, the centroids must be of the same type as the things you cluster. And since the things you should cluster are strings / texts (the titles from the dblp.txt from the last lecture), the centroids also have to be strings / texts. There are many ways in which you could pick the initial centroids. One way would be to just pick k random titles from the dblp.txt as centroids. Hannah 1Feb10 20:55
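For concreteness, a rough sketch of these two steps (k random titles as initial centroids, and recomputing a centroid as the M most frequent words of its cluster, as Florian describes below); the names, the tokenization, and the similarity helper are just assumptions.

    # Rough sketch of k-means with string centroids (assumptions: `titles` is the
    # list of title strings, k = 3, M is the centroid size in words).
    import random
    from collections import Counter

    def jaccard_similarity(a, b):
        A, B = set(a.lower().split()), set(b.lower().split())
        return len(A & B) / len(A | B) if (A | B) else 0.0

    def initial_centroids(titles, k=3):
        return random.sample(titles, k)            # k random titles as centroids

    def assign(titles, centroids):
        clusters = [[] for _ in centroids]
        for t in titles:
            best = max(range(len(centroids)),
                       key=lambda i: jaccard_similarity(t, centroids[i]))
            clusters[best].append(t)
        return clusters

    def recompute_centroid(cluster_titles, M):
        counts = Counter(w for t in cluster_titles for w in t.lower().split())
        return " ".join(w for w, _ in counts.most_common(M))  # M most frequent words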

As I understand it, the centroids have to be strings as well, and the distance of a text to a centroid is just the Jaccard distance of their distinct words. The new centroids can then be created by using the M most frequently occurring words in all texts which are assigned to that centroid, as stated in Exercise 1. The only problem then would be to get the initial centroids... Florian 01Feb10 20:50

As I understood it, we may choose random numbers between 0 and 1, since this is what the Jaccard distance will lead us to. But my next question is: how should we determine the distance of ONE text to its centroid (while initially assigning them to clusters)? Eric 01Feb10 20:38

Hi, how should we choose our initial centroids for the k-means algorithm with the strings? Florian 01Feb10 17:56

Questions and comments about Exercise Sheet 11 below this line (most recent on top)

Hi Johannes, nice way of telling me, and yes, you are of course right, it should be p = h / lambda and q = t / lambda and then p + q = 1 => lambda = h + t. And not p = lambda * h and q = lambda * t and lambda = 1 / (h + t). But believe me, it's very easy to make such stupid mistakes when doing calculations at the (virtual) blackboard. That is why I always ask you guys to pay attention when I am doing calculations, and to correct me if I am doing something wrong. Anyway, also your late feedback is of course appreciated, and I will see how I can correct that thingy on the slides. Hannah 27Jan10 17:34
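For the record, the corrected calculation in one place (with h and t the two counts from slide 12):

    \[
      p = \frac{h}{\lambda}, \qquad q = \frac{t}{\lambda}, \qquad
      p + q = 1 \;\Longrightarrow\; \frac{h + t}{\lambda} = 1
      \;\Longrightarrow\; \lambda = h + t .
    \]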

I have cognitive dissonance about slide 12 of the last lecture. The implications give me an uncomfortable feeling. Johannes 2763-01-27T1704

Hi Matthias, well, you are learning the Pr(W = w|C = c) from the 10% training data (every tenth record), so that's where they should come from. Hannah 27Jan10 16:07

Hi, could you please explain again what data should be used for calculating the highest Pr(W = w|C = c) in Exercise 2? The whole data, the remaining 90%, or the 10% used for learning? Matthias 27Jan10 11:52

A comment for all who haven't submitted their solutions yet (= most). To compute the argmax_c Pr(C = c) * Prod_w Pr(W = w|C = c), better compute the argmax_c of the logarithms, that is, argmax_c log Pr(C = c) + sum_w log Pr(W = w|C = c). The result will be the same, because log is a monotone function. However, computing the sum of the logs is numerically more stable, while computing the product of many small probabilities can lead to numerical problems which can distort the results. I don't think it's a big issue for the relatively small data set I gave you, but I would still do it. Anyway, it's no more work to compute the sums of the logs of things than the product of the things. Hannah 27Jan10 5:19am
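As an illustration only, a sketch of the prediction in log space; `prior` and `cond` are assumed to hold the learned Pr(C = c) and the smoothed Pr(W = w|C = c), and words outside the learned vocabulary are simply skipped here (with the +1 smoothing discussed further down you could give them a small probability instead).

    # Sketch of the argmax over sums of logs (assumptions: `prior[c]` = Pr(C = c),
    # `cond[c][w]` = smoothed Pr(W = w | C = c); unseen words are skipped).
    import math

    def predict(words, prior, cond):
        def score(c):
            return math.log(prior[c]) + sum(math.log(cond[c][w])
                                            for w in words if w in cond[c])
        return max(prior, key=score)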

Hi Alex + all: I didn't have a program so far, but have just written one, and my overall precision is 83.61%. So classification seems to work pretty well on this dataset. I didn't do anything fancy: I used the +1 smoothing to avoid zero probabilities, and used every word. Note that the titles also contain commas, parentheses and the like. I am saying this because I have seen that some people have words like "(extended" or "abstract)" or "title.". So please do pay attention to that and do not tokenize merely by whitespace. Also, whenever you write a program, test it!!! That is, have a small procedure that outputs the learned probabilities (or better, the counts), and then check them for a small example. I did that as well for my program, otherwise I would never be convinced that it does the correct thing. Hannah 27Jan10 1:10am
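One possible way to tokenize, just to illustrate the point about punctuation; the exact rule is up to you.

    # Illustration: keep only runs of letters and digits, so that "(extended"
    # and "abstract)." become "extended" and "abstract".
    import re

    def tokenize(title):
        return re.findall(r"[a-z0-9]+", title.lower())

    print(tokenize("Searching the Web (extended abstract)."))
    # ['searching', 'the', 'web', 'extended', 'abstract']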

Can you give a short hint as to how high the match rate should be? With my program the detection rate is around 1/3. I think this is a little bit low, but I also found no hint as to what rate is a good rate for the given set of documents. Edit: I found a bug; now the detection rate is around 70%, but the question is still the same: is this a plausible result? Alex 26Jan10 18:20

Hi Claudius + all: to get the points, you only have to compute the w with the highest P(W=w|C=c), even if that is a word like "for" or "of". Would be nice and more interesting though, and not really more work, to compute those w with the k highest P(W=w|C=c). For some not too large k, some interesting words should crop up. You can also choose to ignore stopwords altogether as Marjan suggested. Here is a list of English stopwords. Hannah 25Jan10 18:25

Hi Claudius. My recommendation is to ignore stop-words (e.g. the, a, of, is, are etc., for reasons already explained in the lecture) but please wait for a reply from Hannah to be sure. Marjan 25Jan10 14:30

Hi. In Exercise 2, we have to identify the most predictive word for each conference. But when I take the highest Pr(W=w|C=c), I get not very predictive words like "for" and "of". Is this sufficient, or should we make an effort to find words which are more predictive? Claudius 25 Jan 14:26

Yes, very good question (the second one), I had it on my agenda for the lecture, but somehow forgot to tell you about it. There is a very simple and effective solution to that problem, which you should also use in the exercise. On slide #10, I told you to take Pr(W = w | C = c) = n_wc / sum_w n_wc, where n_wc is the total number of occurrences of word w in class c. Well, just take Pr(W = w | C = c) = (n_wc + 1) / sum_w (n_wc + 1), which can never be zero. Intuitively, this is like saying that every word occurs at least once for each class. Which is also reasonable, because if your amount of data is big enough, that will indeed happen. It's just an artefact of small data that some words don't occur at all for certain classes. Please ask again in case that was not crystal clear. Hannah 24Jan10 21:49
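A tiny sketch of the smoothed estimate, just to pin the formula down; `n[c]` is assumed to be a Counter (or dict) of word occurrence counts for class c, and `vocab` the set of all words seen in the training data.

    # Sketch of +1 smoothing: Pr(W = w | C = c) = (n_wc + 1) / sum_w (n_wc + 1).
    # The denominator equals (total word occurrences in class c) + |vocab|.
    def smoothed_prob(w, c, n, vocab):
        total = sum(n[c].values()) + len(vocab)
        return (n[c].get(w, 0) + 1) / total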

To Florian + all: Of course you should use the Bayes formula to predict the most probable conference (class). The second question is a good one. I think the natural way is to take that probability as zero. Another way (actually the opposite) is to ignore the words that have not appeared in the original training set i.e. assume that they're not relevant for the prediction. Marjan 24Jan10 21:32

I have a question about Exercise 2: I do not quite understand how we should predict the conferences for the remaining records. Should we just decide by looking at the most predictive word, or should we use the Naive Bayes formula from the slides ( argmax_c Pr(C = c) · Π_i=1,...,m Pr(W_i = w_i | C = c) )? And when using the Bayes formula, how should we handle words that did not occur in the training data? Using zero for their probability makes the whole probability for the conference zero as well, which is not very reasonable. Florian 24Jan10 21:20

I have also uploaded the master solutions for exercise sheet 10 now, see the link above. Note that it's just two pages. Above you also find links to the previous master solutions now (that is, for the mid-term exam and for exercise sheet 9). If you find any mistakes in any of the master solutions, please let us know immediately, thanks. Also, if you have any questions / comments regarding the master solutions, don't hesitate to ask. Hannah 24Jan10 16:05

Ok, the file is now there, see the link and short description above. Have fun, and let us know if you are having any problems. NOTE: I said it in the lectures, but let me repeat it here, just in case: you must, of course, use only the words from the title as features. The conference name in the first column is only there so that you know the ground truth, which you need for the learning in Exercise 1, as well as for the quality assessment in Exercise 4. Hannah 24Jan10 15:48

I will do it right now, sorry, it was just procrastination from my side. Hannah 24Jan10 15:06

Hi, can you please upload the text-file with the publication records? Claudius 24 Jan 12:05

Hi Manuela + all: I understand your point. I think that when one is familiar with basic linear algebra, then all the exercises (including Exercise 2, given my fairly strong and concrete hints) are something which you just sit down and do, no deep thinking required. But when one is not familiar, then yes, I can see that most of the time will be spent on understanding the meaning of basic things (which, I agree, is very important), like why one can write something like u * v', where u and v are vectors, and obtain a matrix. I guess I am constantly underestimating the mathematical background and practice you received in your first semesters here in Freiburg. Anyway, I will take this into account when computing the marks from your points for the exercise sheets 9, 10, 11, etc. Note that also for the first 8 exercise sheets you could get a 1.0 without getting all the points, even after taking the worst sheet out of the counting. We will have something similar for the second half, too. So don't worry, it will be fair, and please continue to make an effort with the exercises, and continue to give me feedback when an exercise consumed way too much time, for whatever reason. Hannah 21Jan 17:48

Maybe it's only a problem for me that I can't sit down and immediately start to prove, e.g., Exercise 2 or 3. I'm not familiar with linear algebra, and it's difficult to understand the meaning of what we do. So before I can start, I have to search for information and read up on what matrix norms, Frobenius norms, and so on are. That's why Exercises 2 and 3 took me a lot of time. Proving the hints (at the bottom of this page) is also not something I can do in five minutes. And for Exercise 1 it was my own fault that I needed much more time for it; I was confused and did some silly things. Of course it would be nice to have the bonus points for the exam, but it will be hard (and time consuming) to solve all tasks of all exercise sheets without gaps. Thanks for the hints, and I think that the new bonus point system is much better than the old one. The only thing is that I'm not sure if the "time calculation" is better than before. Maybe I'm just too slow. Manuela

To Björn + all: Yes, I see. I think the solution to an exercise like Exercise 1 is much faster to write on paper and then scan in. Typesetting lots of matrices etc. in Latex is no fun, takes lots of time, and shouldn't really be part of an exercise. Hannah 21Jan10 14:32

Yes, your last hint was very helpful, thanks a lot. Sorry for the late response, but I had to work for other courses first, and it took me like 3 hours to put the other solutions into Latex (maybe this is also one reason why this sheet takes lots of time again; especially Exercise 1 is okay to solve using applets/programs + copy&paste for all intermediate steps, but writing everything down still takes ages). Now that I have looked at Exercise 2 again, your hint really helped. Björn 21Jan 13:03
