Welcome to the Wiki page of the course Search Engines, WS 2009 / 2010. Lecturer: Hannah Bast. Tutorials: Marjan Celikik. Course web page: click here.
Here are PDFs of the slides of the lectures so far: Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6, Lecture 7, Lecture 8, Lecture 9, Lecture 10, Lecture 11.
Here are the recordings of the lectures so far (except Lecture 2, where we had problems with the microphone), LPD = Lecturnity recording: Recording Lecture 1 (LPD), Recording Lecture 3 (LPD), Recording Lecture 4 (LPD), Recording Lecture 5 (LPD without audio), Recording Lecture 6 (LPD), Recording Lecture 7 (AVI), Recording Lecture 8 (AVI), Recording Lecture 9 (AVI), Recording Lecture 10 (AVI), Recording Lecture 11 (AVI).
Here are PDFs of the exercise sheets so far: Exercise Sheet 1, Exercise Sheet 2, Exercise Sheet 3, Exercise Sheet 4, Exercise Sheet 5, Exercise Sheet 6, Exercise Sheet 7, Exercise Sheet 8, Exercise Sheet 9, Exercise Sheet 10, Exercise Sheet 11.
Here are your solutions and comments on the previous exercise sheets: Solutions and Comments 1, Solutions and Comments 2, Solutions and Comments 3, Solutions and Comments 4, Solutions and Comments 5, Solutions and Comments 6, Solutions and Comments 7, Solutions and Comments 8, Solutions and Comments 9, Solutions and Comments 10.
Here are our master solutions: Master solution for Mid-Term Exam, Master solution for Exercise Sheet 9, Master solution for Exercise Sheet 10.
The recordings of all lectures are now available, see above. Lecture 2 is missing because we had technical problems there. To play the Lecturnity recordings (.lpd files) you need the Lecturnity Player, which you can download here. I put the Camtasia recordings as .avi files, which you can play with any ordinary video player; I would recommend VLC.
Here are the rules for the exercises as explained in Lecture 2.
Here is everything about the mid-term exam.
Here is the file for Exercise Sheet 11. It's a text file where each line contains the name of the conference (in capital letters), followed by a TAB (ASCII code 9), followed by the title. There are three different conferences: STOC (2423 titles), SIGIR (2372 titles), and SIGGRAPH (1835 titles). The total number of titles / lines is 6630. The exact file size is 454365 bytes.
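The format described above can be parsed in a few lines. Here is a minimal, illustrative sketch (the function name and the local path are made up, not part of the exercise):

```python
def read_records(path):
    # One record per line: conference name, a TAB (ASCII code 9),
    # then the paper title.
    records = []
    with open(path) as f:
        for line in f:
            conference, title = line.rstrip("\n").split("\t", 1)
            records.append((conference, title))
    return records
```

If the parse is right, `len(read_records("dblp.txt"))` on the real file should match the 6630 lines stated above.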
Here is the table with the links to your uploaded solutions for Exercise Sheet 11. The deadline is Thursday 28Jan10 16:00.
Questions and comments about Exercise Sheet 11 below this line (most recent on top)
Hi, could you please explain again what data should be used for calculating the highest Pr(W = w|C = c) in Exercise 2? The whole data, the remaining 90%, or the 10% used for learning? Matthias 27Jan10 11:52

A comment for all who haven't submitted their solutions yet (= most): to compute argmax_c Pr(C = c) * Prod_w Pr(W = w|C = c), better compute the argmax_c of the logarithms, that is, argmax_c log(Pr(C = c)) + sum_w log(Pr(W = w|C = c)). The result will be the same, because log is a monotone function. However, computing the sum of the logs is numerically more stable, while computing the product of many small probabilities can lead to numerical problems which can distort results. I don't think it's a big issue for the relatively small data set I gave you, but I would still do it. Anyway, computing the sums of the logs is no more work than computing the product. Hannah 27Jan10 5:19am

Hi Alex + all: I didn't have a program so far, but I have just written one, and my overall precision is 83.61%. So classification seems to work pretty well on this dataset. I didn't do anything fancy: I used the +1 smoothing to avoid zero probabilities, and I used every word. Note that the titles also contain commas, parentheses and the like. I am saying this because I have seen that some people have words like "(extended" or "abstract)" or "title.". So please do pay attention to that and do not tokenize merely by whitespace. Also, whenever you write a program, test it! That is, have a small procedure that outputs the learned probabilities (or better, the counts), and then check them for a small example. I did that for my program as well; otherwise I would never be convinced that it does the correct thing. Hannah 27Jan10 1:10am

Can you give a short hint as to how high the match rate should be? With my program the detection rate is around 1/3. I think this is a little bit low, but I also found no hint as to what rate is a good rate for the given set of documents.
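Hannah's remark above about summing logarithms instead of multiplying probabilities is easy to verify numerically. A small self-contained sketch with made-up probabilities (not from the exercise data):

```python
import math

# 200 hypothetical word probabilities of 1e-4 each: the direct
# product would be 1e-800, far below the smallest positive double,
# so it underflows to exactly 0.0 ...
probs = [1e-4] * 200
product = 1.0
for p in probs:
    product *= p

# ... while the sum of logs stays in a perfectly safe range.
log_score = sum(math.log(p) for p in probs)

print(product)    # 0.0
print(log_score)  # about -1842.07
```

With the product underflowing to zero for every class, the argmax becomes meaningless; the log scores remain comparable.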
Edit: I found a bug; now the detection rate is around 70%, but the question is still the same: is this a plausible result? Alex 26Jan10 18:20

Hi Claudius + all: to get the points, you only have to compute the w with the highest Pr(W = w|C = c), even if that is a word like "for" or "of". It would be nice and more interesting, though, and not really more work, to compute the words w with the k highest Pr(W = w|C = c). For some not too large k, some interesting words should crop up. You can also choose to ignore stopwords altogether, as Marjan suggested. Here is a list of English stopwords. Hannah 25Jan10 18:25

Hi Claudius. My recommendation is to ignore stop-words (e.g. the, a, of, is, are, etc., for reasons already explained in the lecture), but please wait for a reply from Hannah to be sure. Marjan 25Jan10 14:30

Hi. In Exercise 2, we have to identify the most predictive word for each conference. But when I take the highest Pr(W = w|C = c), I get not very predictive words like "for" and "of". Is this sufficient, or should we make an effort to find words which are more predictive? Claudius 25Jan10 14:26

Yes, very good question (the second one); I had it on my agenda for the lecture, but somehow forgot to tell you about it. There is a very simple and effective solution to that problem, which you should also use in the exercise. On slide #10, I told you to take Pr(W = w | C = c) = n_wc / sum_w n_wc, where n_wc is the total number of occurrences of word w in class c. Well, just take Pr(W = w | C = c) = (n_wc + 1) / sum_w (n_wc + 1), which can never be zero. Intuitively, this is like saying that every word occurs at least once for each class. Which is also reasonable, because if your amount of data is big enough, that will indeed happen. It's just an artifact of small data that some words don't occur at all for certain classes. Please ask again in case that was not crystal clear. Hannah 24Jan10 21:49

To Florian + all: Of course you should use the Bayes formula to predict the most probable conference (class). The second question is a good one.
I think the natural way is to take that probability as zero. Another way (actually the opposite) is to ignore the words that have not appeared in the original training set, i.e. assume that they are not relevant for the prediction. Marjan 24Jan10 21:32

I have a question about Exercise 2: I do not quite understand how we should predict the conferences for the remaining records. Should we just decide by looking at the most predictive word, or should we use the Naive Bayes formula from the slides (argmax_c Pr(C = c) · Π_i=1,...,m Pr(W_i = w_i | C = c))? And when using the Bayes formula, how should we handle words that did not occur in the training data? Using zero for their probability makes the whole probability for the conference zero as well, which is not very reasonable. Florian 24Jan10 21:20

I have also uploaded the master solution for exercise sheet 10 now, see the link above. Note that it's just two pages. Above you now also find links to the previous master solutions (that is, for the mid-term exam and for exercise sheet 9). If you find any mistakes in any of the master solutions, please let us know immediately, thanks. Also, if you have any questions / comments regarding the master solutions, don't hesitate to ask. Hannah 24Jan10 16:05

Ok, the file is now there, see the link and short description above. Have fun, and let us know if you are having any problems. NOTE: I said it in the lectures, but let me repeat it here, just in case: you must, of course, use only the words from the title as features. The conference name in the first column is only there so that you know the ground truth, which you need for the learning in Exercise 1, as well as for the quality assessment in Exercise 4. Hannah 24Jan10 15:48

I will do it right now, sorry, it was just procrastination on my side. Hannah 24Jan10 15:06

Hi, can you please upload the text file with the publication records? Claudius 24Jan10 12:05

Hi Manuela + all: I understand your point. I think that when one is familiar with basic linear algebra, all the exercises (including Exercise 2, given my fairly strong and concrete hints) are something which you just sit down and do, no deep thinking required.
But when one is not familiar, then yes, I can see that most of the time will be spent on understanding the meaning of basic things (which, I agree, is very important), like why one can write something like u * v', where u and v are vectors, and obtain a matrix. I guess I am constantly underestimating the mathematical background and practice you received in your first semesters here in Freiburg. Anyway, I will take this into account when computing the marks from your points for exercise sheets 9, 10, 11, etc. Note that also for the first 8 exercise sheets you could get a 1.0 without getting all the points, even after taking the worst sheet out of the counting. We will have something similar for the second half, too. So don't worry, it will be fair; please continue to make an effort with the exercises, and continue to give me feedback when an exercise consumed way too much time, for whatever reason. Hannah 21Jan10 17:48

Maybe it's only a problem for me that I can't just sit down and immediately start to prove, for example, exercises 2 or 3. I'm not familiar with linear algebra, and it's difficult to understand the meaning of what we do. So before I can start, I have to search for information and read up on what matrix norms, Frobenius norms and so on are. That's why exercises 2 and 3 took me so much time. Proving the hints (at the bottom of this page) is also nothing I can do in five minutes. And for exercise 1 it was my own fault that I needed much more time; I was confused and did some silly things. Of course it would be nice to have the bonus points for the exam, but it will be hard (and time consuming) to solve all tasks of all exercise sheets without gaps. Thanks for the hints, and I think that the new bonus point system is much better than the old one. The only thing is that I'm not sure if the "time calculation" is better than before. Maybe I'm just too slow. Manuela
To Björn + all: Yes, I see; I think the solution to an exercise like Exercise 1 is much faster to write on paper and then scan in. Typesetting lots of matrices etc. in Latex is no fun, takes lots of time, and shouldn't really be part of an exercise. Hannah 21Jan10 14:32

Yes, your last hint was very helpful, thanks a lot. Sorry for the late response, but I had to work for other courses first, and it took me about 3 hours to put the other solutions into Latex (maybe this is also one reason why this sheet again takes lots of time; Exercise 1 is okay to solve using applets/programs + copy&paste for all intermediate steps, but writing everything down still takes ages). Now that I have looked at exercise 2 again, your hint really helped. Björn 21Jan10 13:03
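Putting the pieces from the thread above together (add-one smoothing over a shared vocabulary, log-space scores, skipping words unseen in training, which is one of the two options Marjan mentions), a minimal Naive Bayes classifier might look as follows. All names are illustrative, and the whitespace tokenization is deliberately naive; per Hannah's comment you would want to strip punctuation first:

```python
import math
from collections import Counter, defaultdict

def train(records):
    """records: list of (class, title) pairs. Returns class priors and
    add-one-smoothed word probabilities per class."""
    titles_per_class = defaultdict(list)
    for c, title in records:
        titles_per_class[c].append(title)
    vocab = {w for _, t in records for w in t.split()}
    priors = {c: len(ts) / len(records) for c, ts in titles_per_class.items()}
    word_probs = {}
    for c, ts in titles_per_class.items():
        n = Counter(w for t in ts for w in t.split())
        # Pr(W = w | C = c) = (n_wc + 1) / sum_w (n_wc + 1), never zero.
        denom = sum(n[w] + 1 for w in vocab)
        word_probs[c] = {w: (n[w] + 1) / denom for w in vocab}
    return priors, word_probs

def predict(title, priors, word_probs):
    """argmax_c log Pr(C = c) + sum_w log Pr(W = w | C = c);
    words not in the training vocabulary are simply skipped."""
    def score(c):
        s = math.log(priors[c])
        for w in title.split():
            if w in word_probs[c]:
                s += math.log(word_probs[c][w])
        return s
    return max(priors, key=score)
```

Train on 90% of the records and call `predict` on the held-out titles to measure the precision discussed above.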