Goals for ACM Multimedia Reproducibility Reviews
Please first consult the general reviewing guidelines for ACM Multimedia 2020.
Please also refer to the ACM Multimedia Reproducibility pages for information about what an ideal repro submission should contain.
- Interacting with the authors is the most important thing. As soon as you are blocked, whether something fails during installation or execution or a result looks strange, talk to the authors while you investigate in parallel. You will waste less time. The overall duration of the review process may increase, but that does not matter.
- The assumption is that (a) the original ACM Multimedia paper contains valid science and (b) the corresponding repro paper indeed supports the findings and can be trusted. The goal of the repro track is for a repro paper to *eventually* be accepted, once what it proposes is clear and clean enough for other members of the community to build on the findings. It may take time and many iterations and discussions with the authors, but the goal is to have papers pass the reviewing phase after updates and adjustments. Rejects should be limited to cases where something goes wrong: reproducing the findings requires tremendous work; something is very unclear or incorrect; or the authors do not discuss with you in an open and friendly way, as if they were trying to hide something from your sight.
- Take special care if the system you receive is a black box. We must do our best to verify that the binaries received are truly running what is described in the ACM Multimedia paper. We must make sure the authors have not created a piece of software that ingests a few user-adjustable parameters and simply prints out the expected results, without running anything in between. We must work until we are convinced that we can trust the system provided by the authors. It is easy to cook any result, to tweak it so that it looks realistic and appears consistent with whatever adjustment of the parameters. There is no way we can detect all impostors; by default, however, there is a very high level of trust between us. A simple probing technique is sketched after this list.
- The goal is to complete the reviews by the regular review deadline, so that notifications can be given at the regular notification deadline. If reviewing a repro paper turns out to be particularly complicated, it is OK to miss that deadline; it simply means that the repro paper will not be included in the proceedings of ACM MM 2020. Its review will continue until a final decision is reached, with an aim of publishing the repro paper in ACM MM 2021. If the review cannot be finalized before the notification deadline for ACM MM 2021, however, the paper is rejected.
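To make the black-box concern above concrete, here is a minimal sketch of a probe a reviewer might run; the binary name ./authors_tool and its --param flag are hypothetical placeholders, not part of any actual submission. A system that merely prints precomputed numbers tends to return almost instantly and ignore the input, so comparing runtimes and outputs across parameter values is a cheap first check.

```python
import subprocess
import time

def probe(binary, param_values):
    """Run the (hypothetical) author-provided binary with several parameter
    values and record how runtime and output respond."""
    results = []
    for value in param_values:
        start = time.time()
        completed = subprocess.run(
            [binary, "--param", str(value)],  # hypothetical flag
            capture_output=True, text=True, check=True,
        )
        elapsed = time.time() - start
        results.append((value, elapsed, completed.stdout.strip()))
    return results

if __name__ == "__main__":
    for value, elapsed, output in probe("./authors_tool", [1, 10, 100]):
        # Canned results tend to show flat runtimes and identical outputs
        # regardless of the input size; real computation usually scales.
        print(f"param={value}  runtime={elapsed:.2f}s  output={output[:60]}")
```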
Reproducibility Review Process for ACM MM 2020
The reproducibility review process should run as follows:
- The chairs of the committee will pair you with another reviewer who accepted that duty, as there must be at least two reviewers per reproducible paper. It is a team, and close interaction between reviewers must be the standard. Tell the chairs quickly if any cooperation difficulty gets in your way.
- The chairs of the committee will put you in touch with the authors of the paper. If necessary, initiate a discussion with the authors to ensure you are on the same page regarding specific details of the experiments that may not be clear.
- After you receive access to code, datasets, and scripts, quickly verify that the submission is complete and has enough documentation before proceeding further (a minimal completeness check is sketched after this list).
- When an issue appears, immediately contact the authors to help resolve it. The authors’ collaboration is both welcome and expected, because they know and understand the specifics of their algorithms and code. In most cases, issues have to do with automating a script (e.g., making it path-independent; see the sketch after this list) or dealing with dependencies.
- The goal is both to go through the process of reproducing the results and to provide enough feedback to the authors so that they can create an easily shareable instance of their code and experiments.
- When running experiments, eliminate any interference on your systems: unless otherwise stated by the authors, make sure the machines used are dedicated to the reproducibility experiments (a quick load check is sketched after this list).
- If the results you gathered during the reproducibility process do not support the claims, contact the authors to address this. The discrepancy may be due to issues with the set-up of the experiments or the hardware specifications.
- The reviewing team writes a short report (1/2 to 1 page) documenting the outcome and whether the results could be reproduced. If you can reproduce only part of the results, please indicate that in the report. Also note any important details regarding the set-up or analysis. Share this report with the authors to make sure they consider it a fair result, and take any suggestions into account.
- That report is merged with the reproducibility paper that was reviewed; the two reviewers become co-authors of the resulting companion paper, which is placed in the ACM DL in connection with the original contribution, to which the badge is attached.
- If a paper cannot be reproduced, contact the chairs. We will then dig into the problems.
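For the completeness check mentioned above, something as simple as the following sketch can catch missing pieces early; the expected file names are assumptions about what a typical submission contains, not official requirements of the track.

```python
from pathlib import Path

# Hypothetical list of files a typical submission is expected to ship with;
# adjust to whatever the authors' documentation promises.
EXPECTED = ["README.md", "LICENSE", "requirements.txt", "run_experiments.sh"]

def check_submission(root):
    """Report which expected top-level files are missing from the artifact."""
    root = Path(root)
    missing = [name for name in EXPECTED if not (root / name).exists()]
    if missing:
        print("Incomplete submission; ask the authors about:", ", ".join(missing))
    else:
        print("All expected top-level files are present.")

check_submission("submission/")  # hypothetical path to the unpacked artifact
```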
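As for the path-independence issue mentioned above, a script that hard-codes absolute paths can usually be rewritten to resolve everything relative to its own location, as in this minimal sketch (the data/ and results/ layout is hypothetical). For the dependency side, pinning exact versions (e.g., package==x.y.z lines in a requirements.txt) is usually the quickest fix.

```python
from pathlib import Path

# Path-independent experiment driver: resolve data and output locations
# relative to this script's own directory instead of hard-coding an
# absolute path such as /home/author/project/data.
SCRIPT_DIR = Path(__file__).resolve().parent
DATA_DIR = SCRIPT_DIR / "data"       # hypothetical layout
OUTPUT_DIR = SCRIPT_DIR / "results"  # hypothetical layout

OUTPUT_DIR.mkdir(exist_ok=True)
for dataset in sorted(DATA_DIR.glob("*.csv")):
    print(f"Would process {dataset} into {OUTPUT_DIR}")
```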
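Finally, for the dedicated-machine requirement above, a quick pre-flight load check can guard against accidental interference; this sketch uses the third-party psutil package, and the 25% CPU threshold is an arbitrary illustration, not a rule of the track.

```python
import psutil  # third-party: pip install psutil

# Refuse to start a timed reproducibility run if the machine is visibly
# busy. The 25% threshold below is an arbitrary illustration.
cpu_load = psutil.cpu_percent(interval=1.0)
if cpu_load > 25.0:
    raise SystemExit(f"Machine busy (CPU at {cpu_load:.0f}%); rerun when idle.")
print(f"CPU at {cpu_load:.0f}%, proceeding with the reproducibility run.")
```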