Revision 69 as of 2018-04-20 01:26:49
This page provides details about the ESA 2018 Track B Experiment. ESA (European Symposium on Algorithms) is one of the premier conferences on algorithms. It has two tracks: Track A (Design and Analysis) and Track B (Engineering and Applications). The basic setup of the experiment is as follows: there will be two separate PCs for Track B, which will independently review all the submissions to Track B and independently decide on a set of papers to be accepted. After the PCs have done their work, the results will be compared both quantitatively (e.g., overlap in sets of accepted papers) and qualitatively (e.g., typical reasons for differences in the decisions of the two PCs). The results of this comparison will be published. Depending on the outcome, the set of accepted papers for Track B will either be the union of the sets of accepted papers from the two independent PCs, or there will be a final round (outside of the experiment) discussing the submissions where the two PCs reached different conclusions.
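The quantitative comparison mentioned above can be made concrete in several ways. A minimal sketch of one such measure (intersection size plus Jaccard index over the two accepted-paper sets) is below; the function name and the paper IDs are purely illustrative and not part of the experiment's specification:

```python
def overlap_stats(accepted_pc1, accepted_pc2):
    """Return (intersection size, Jaccard index) for two sets of paper IDs.

    The Jaccard index is |A ∩ B| / |A ∪ B|; it is 1.0 when both sets are
    empty (full agreement in the degenerate case).
    """
    a, b = set(accepted_pc1), set(accepted_pc2)
    common = a & b
    jaccard = len(common) / len(a | b) if (a or b) else 1.0
    return len(common), jaccard

# Illustrative (made-up) paper IDs:
n_common, jaccard = overlap_stats({"p1", "p2", "p3", "p4"}, {"p2", "p3", "p5"})
```

Other measures (e.g., rank correlation of scores) would require per-submission data rather than just the accepted sets.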
[[http://algo2018.hiit.fi/esa|The official ESA 2018 website]]
<<BR>><<TableOfContents(2)>>

[[ESA2018Experiment/SelectionOfPCs|How the PCs were selected]]
= Selection of the two PCs =

Both PCs have 12 members. Both have the same PC chair. The complete list of members can be found here: http://algo2018.hiit.fi/esa/#pcb . The PCs have been set up so as to have an identical distribution with respect to topic, age group, gender, and continent in the following sense. The topics are only a rough categorization of what the respective PC members are working on (many work on more than one topic, and topics are not that clear-cut anyway).

|| Gender || 8 men, 4 women ||
|| Age group || 2 x junior (PhD <= 5 years ago), 4 x relatively junior (PhD <= 10 years ago), 6 x senior ||
|| Continent || 8 x Europe, 4 x Americas (we tried Asia, but weren't successful, sorry for that) ||
|| Topic || 1 x parallel algorithms (junior), 2 x string algorithms (one less senior, one more senior), 2 x computational geometry (one junior, one senior), 2 x operations research (one junior, one senior), 5 x algorithms in general (three junior, two senior) ||

= Reviewing "Algorithm" =

The reviewing algorithm is essentially the same as in previous years. Because of the experiment, and because it is a good idea anyway, we tried to specify it beforehand. However, this is not a 100% complete and precise specification of the process. The goal was to be as specific as possible without making the description overly complicated or impractical. We will fill in the gaps and fix problems in a reasonable way as we go along, taking care that we treat both PCs equally. As far as the experiment is concerned, these conditions are not perfect, but they appear OK and reasonable given the complexity of the process and the agents involved.

== Duration and phases ==

The total time for the reviewing process (from the submission deadline to author notification) is 8 weeks.

 1. The deadline for submissions is April 22 AoE (strict)
 2. Bidding and paper assignment: 1 week (~ April 23 - April 29)
 3. Reviewing: 4 weeks (~ April 30 - May 27)
 4. Discussion + recalibration of reviews: 2 weeks (~ May 28 - June 10)
 5. Buffer for things going wrong or taking longer than expected: 1 week
 6. Notification deadline is June 18 (maybe earlier)

== Text of a single review ==

Each review should provide the following information:

 1. A short summary of the main contribution(s) of the submission in the words of the reviewer
 1. An itemized list of the strengths and weaknesses of the submission
  a. The strengths should be numbered (S1), (S2), ...
  a. The weaknesses should be numbered (W1), (W2), ...
 1. More detailed explanations of the strengths and weaknesses (if necessary or useful)
 1. Comments to the authors for improving the paper (if applicable)

== Score of a single review ==

Each review should provide one of the following scores.

|| '''Score''' || '''Verdict''' || '''Behavior during discussion''' ||
|| +2 (accept) || No major weaknesses || I would champion this paper and fight against rejection ||
|| +1 (weak accept) || Significant weaknesses, but nothing fatal || I would support this paper, but not fight against rejection ||
|| 0 (borderline) || Hovering between +1 and −1 || Not sure yet about the severity of the weaknesses / the threshold for ESA ||
|| −1 (weak reject) || Significant weaknesses, but nothing fatal || I am not supporting this paper, but would also not fight against acceptance ||
|| −2 (reject) || Major weaknesses || I am opposing this paper and would fight against acceptance ||

Remark 1: Some conferences also have +3 (strong accept) and −3 (strong reject). Experience shows that these are of little use for deciding on the set of accepted papers for a moderate number of submissions, as in ESA Track B (around 50).

Remark 2: Some conferences disallow the borderline score of 0, to enforce a clear opinion from the reviewer. In the discussion phase, we indeed ask reviewers to commit to one of the other scores. But for the first review, we think it makes sense to allow this score, because it reflects one of the typical sentiments about a paper at this stage of the reviewing process, as expressed by the ''hovering between +1 and −1'' in the table above.

Remark 3: Reviewers might not yet be fully aware of their behavior during the discussion phase, for various reasons (for example: not being sure about some aspects of the paper, not being sure about the nature of the threshold for ESA Track B, or general inexperience in reviewing). This can make choosing the right score difficult. But this is exactly one of the tasks of the discussion phase (described below): to bring the final scores (and reviews) closer to what they were supposed to reflect.

== Discussion phase ==

The discussion phase is not easy to specify algorithmically. The basic procedure is clear, but there are many eventualities, most of which will not happen, but some of which will, and it is hard to say in advance which. We try our best anyway. As mentioned in the disclaimer at the beginning of this ''Reviewing Algorithm'' section, we try to be as specific as possible without being overly complicated or impractical.

=== Groups of submissions ===

We distinguish between the following groups of submissions. Except for Group X, the descriptions in the second column assume that there are at least three reviews for each submission. The significance of these groups will become clear in the specification of the discussion rounds below:

|| Group A1 || clear support ||
|| Group A2 || at least one champion + weak support from the others ||
|| Group C1 || weak support + strong opposition ||
|| Group C2 || strong support + weak opposition ||
|| Group C3 || strong support + strong opposition (this is rare) ||
|| Group R1 || no support ||
|| Group R2 || weak support + no champion ||
|| Group X || two of the reviews are missing or completely lack substance (hopefully this group will be empty) ||

Remark 1: The assignment of a submission to one of these groups will not be done by score alone, but also based on what is written in the reviews. Of course, there will be a strong correlation with the scores. In fact, if the scores were perfect, the correlation would be perfect. But it lies in the nature of the process that some reviewers (and PC members) are unsure about a submission or about the threshold for ESA. So one important part of the discussion is to make sure that the scores actually reflect what they were originally intended to reflect. For example, a submission with scores {2, 2, 2} will probably be in Group A1 (unless the support expressed in the reviews is weaker than it might appear from the scores, in which case Group A2 might be more appropriate), and a submission with only negative scores will probably be in Group R1 (unless the reviews are more positive about the paper than it might appear from the scores, in which case Group R2 or C1 might be more appropriate).

Remark 2: Submissions can change groups at any time due to the ongoing discussions.

Remark 3: Any assignment can be disputed by any PC member at any time, and submissions can also change groups because of that. For example, if another PC member opposes a submission from A2, that submission will go into C2 or C3.

Remark 4: No decision is final until the end of the discussion phase.
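As Remark 1 stresses, group assignment depends on the text of the reviews, not on scores alone. Still, a rough first-cut heuristic based only on scores can illustrate how the score scale and the groups relate. The following sketch is hypothetical (the function and the exact thresholds are not part of the process description) and can at best produce a starting point that PC members would then dispute and refine:

```python
def initial_group(scores):
    """Map a list of review scores in {-2, -1, 0, +1, +2} to a tentative group.

    A score-only approximation of the group table: +2 marks a champion
    (strong support), -2 a strong opponent, +1 weak support, -1 weak
    opposition. The real assignment also weighs the review texts.
    """
    champion = 2 in scores       # someone would fight for acceptance
    strong_opp = -2 in scores    # someone would fight against acceptance
    weak_support = 1 in scores
    weak_opp = -1 in scores
    if champion and strong_opp:
        return "C3"  # strong support + strong opposition (rare)
    if champion and weak_opp:
        return "C2"  # strong support + weak opposition
    if champion:
        # clear support if every review is positive, otherwise a champion
        # with only weak support from the others
        return "A1" if all(s >= 1 for s in scores) else "A2"
    if weak_support and strong_opp:
        return "C1"  # weak support + strong opposition
    if weak_support:
        return "R2"  # weak support + no champion
    return "R1"      # no support
```

For example, under this heuristic the scores {2, 2, 2} from Remark 1 land in Group A1, and all-negative scores land in Group R1, matching the "probably" cases described there; the borderline cases are exactly where the review texts would override the scores.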
[[ESA2018Experiment/ReviewingAlgorithm|The reviewing "algorithm"]]