2019 Repro Papers

This page lists the reproducibility papers accepted at the 2019 edition of ACM Multimedia in Nice, France.


  • Title: Using Mr. MAPP for Lower Limb Phantom Pain Management
  • Authors: Kanchan Bahirat, Yu-Yen Chung, Thiru Annaswamy, Gargi Raval, Kevin Desai, Balakrishnan Prabhakaran, Michael Riegler
  • Abstract: Phantom pain is chronic pain experienced as a vivid sensation stemming from a missing limb. From the traditional mirror box to virtual reality-based approaches, a wide spectrum of treatments using mimic feedback of the amputated limb has been developed for alleviating phantom limb pain. In our previous work, the Mixed Reality-based framework for MAnaging Phantom Pain (Mr.MAPP) was presented and used to generate a virtual phantom upper limb, in real time, to manage phantom pain. However, amputation of the lower limb is more common than that of the upper limb. Hence, in this paper, in addition to demonstrating the reproducibility of the Mr.MAPP framework for the upper limb, we extend it to manage lower limb phantom pain as well. Unlike an upper limb amputee, a patient with a lower limb amputation is constrained to performing the training procedure in a sitting posture. Accordingly, virtual training games are designed for lower limb exercises in a sitting posture, such as knee flexion and extension, ankle dorsiflexion, and tandem coordinated movement. Finally, the technical details of the system setup for playing the training games are introduced.
  • DOI: 10.1145/3343031.3351165
  • Original ACM MM’17 Contribution: Mr.MAPP: Mixed Reality for MAnaging Phantom Pain
  • Result Replicated:


  • Title: Reproducible Experiments on Adaptive Discriminative Region Discovery for Scene Recognition
  • Authors: Zhengyu Zhao, Zhuoran Liu, Martha Larson, Ahmet Iscen, Naoko Nitta
  • Abstract: This companion paper supports the replication of scene image recognition experiments using Adaptive Discriminative Region Discovery (Adi-Red), an approach presented at ACM Multimedia 2018. We provide a set of artifacts that allow the replication of the experiments using a Python implementation. All the experiments are covered in a single shell script, which requires the installation of an environment, either following our instructions or using ReproZip. The data sets (images and labels) are automatically downloaded, and the train-test splits used in the experiments are created. The first experiment is from the original paper, and the second supports exploration of the resolution of the scale-specific input image, an interesting additional parameter. For both experiments, five other parameters can be adjusted: the threshold used to select the number of discriminative patches, the number of scales used, the type of patch selection (Adi-Red, dense, or random), and the architecture and pre-training data set of the pre-trained CNN feature extractor. The final output includes four tables (the original Table 1, Table 2, and Table 4, plus a table for the resolution experiment) and two plots (the original Figure 3 and Figure 4).
  • DOI: 10.1145/3343031.3351169
  • Original ACM MM’18 Contribution: From Volcano to Toyshop: Adaptive Discriminative Region Discovery for Scene Recognition
  • Result Replicated:


  • Title: On Reproducing Semi-dense Depth Map Reconstruction using Deep Convolutional Neural Networks with Perceptual Loss
  • Authors: Ilya Makarov, Dmitrii Maslov, Olga Gerasimova, Vladimir Aliev, Alisa Korinevskaya, Ujjwal Sharma, Haoliang Wang
  • Abstract: In our recent papers, we proposed a new family of residual convolutional neural networks trained for semi-dense and sparse depth reconstruction without use of the RGB channel. The proposed models can be used with low-resolution depth sensors or SLAM methods that estimate partial depth with certain distributions. We proposed using a perceptual loss for training depth reconstruction in order to better preserve edge structure and reduce the over-smoothness of models trained on MSE loss alone. This paper contains a reproducibility companion guide for training, running, and evaluating the suggested methods, while also presenting links to further studies in view of reviewers' comments and related problems of depth reconstruction.
  • DOI: 10.1145/3343031.3351167
  • Original ACM MM’17 Contribution: Semi-Dense Depth Interpolation using Deep Convolutional Neural Networks
  • Result Replicated:


  • Title: Companion Paper for MiniView Layout for Bandwidth-Efficient 360-Degree Video
  • Authors: Mengbai Xiao, Shuoqian Wang, Chao Zhou, Li Liu, Zhenhua Li, Yao Liu, Songqing Chen, Lucile Sassatelli, Gwendal Simon
  • Abstract: This artifact includes the source code, scripts, and datasets required to reproduce the experimental figures in the evaluation of the MM’18 paper entitled “MiniView Layout for Bandwidth-Efficient 360-Degree Video”. The artifact reports the comparison results among the standard cube layout (CUBE), the equi-angular layout (EAC), and the MiniView layout (MVL) in terms of compressed video size, visual quality of views, and decoding and rendering time.
  • DOI: 10.1145/3343031.3351168
  • Original ACM MM’18 Contribution: MiniView Layout for Bandwidth-Efficient 360-Degree Video
  • Result Replicated:
