Will reproducibility cost me a lot of extra time?
Ask any experienced researcher in academia or industry, and they will tell you that they already follow reproducibility principles on a daily basis: not as an afterthought, but as a way of doing good research.
Maintaining easily repeatable experiments makes working on hard problems much easier, because you can rerun your analysis on different data sets, on different hardware, and with different parameters.
Like other leading system designers, you will save significant amounts of time because you will minimize the setup and tuning effort for your experiments.
In addition, such practices will help bring new team members up to speed after a project has lain dormant for a few months.
Ideally, reproducibility should require close to zero extra effort.
How is a reproducibility paper related to the open source software competition?
The open source software competition (OSSC) and the ACM Multimedia Reproducibility initiative share goals, processes, and methods: clear specification, and flexibility in the code, parameters, and datasets. Their nature differs, however.
Making a paper replicable, and therefore exposing its code and other artifacts, does not mean that this piece of work can later be reused elsewhere for some other purpose; it is done solely to replicate the results of that paper. In contrast, open source material typically comes with some form of licence facilitating its integration elsewhere.
Replicability might involve non-open third-party material (e.g., you need a particular Matlab module in order to run the linear algebra routine that replicates my results), whereas the OSSC tries hard to avoid this. An OSSC submission is typically something of general interest to the community, such as a full software package or a well-designed library, whereas replicability is about the artifacts needed for a specific paper. In addition, replicability includes (re-)creating the graphs that support the paper's claims, whereas open source material might not be concerned with this, but rather with providing a lightweight, fast, correct implementation of a given technique.
With the OSSC, the emphasis is on providing implementations of important algorithms that will be widely reused by the community. The results go beyond being artifacts to being production-level products that are used by other researchers.
It is possible that something which started as a nice replicability paper can evolve into submission in later years for the OSSC.
GitHub: Does putting a GitHub link in my scientific contribution conflict with the reproducibility initiative?
No! Scientific papers may still include GitHub links when they are first submitted. We do not want to disincentivize people from releasing their code with the main paper. Although that code may be reused by others, the quality of what is released is typically much lower than what the reproducibility initiative at ACM MM requires.
Note that people who download your GitHub material often bombard you with e-mails asking how to run it, how to vary the parameters, and so on.
Your answers typically form reproducibility material. If you start with GitHub, then you are already in tune with this reproducibility story; one more step and you will be through the looking glass. It is worth trying!
Dispute: Can I dispute the reproducibility results?
You will not have to! If any problems appear during the reviewing phase, the committee will contact you directly, so we can work with you to find the best way to evaluate your work.
What happens if it takes more than six months for my work to pass?
It might happen; we all know that the devil is in the details. Evaluating your work may require substantial effort from the committee and many interactions with you due to unexpected complications. Unless your paper fails the reviewing process (see below), it will eventually receive ACM badge(s), be included in the proceedings of the ACM MM conference of year N+2, and you will get the chance to present your work in the poster session.
Rejects: What happens if my work does not pass?
Although we expect to be able to help all papers pass the reproducibility reviewing process, in the rare event that a paper does not make it through, this information will not be made public in any way.
So there is no downside in submitting your work!
Failing to pass might, for example, be caused by poor preparation and packaging of the material needed for replicability, which turns the reviewing task into a nightmare.
Reproducibility papers that cannot be successfully reviewed within a year of their submission date are rejected and do not pass. Reviewers are there to check the soundness of what you prepared and to interact with you. If you target the Results Replicated badge, these interactions will result in you fixing (small) issues; cleaning, testing, deploying, packaging, and commenting are all in your hands. Getting close to unzip-and-run material is the target.