We are pleased to announce the keynote speakers of our workshop: Prof. Tat-Seng Chua, Prof. Alberto del Bimbo, and Prof. Pablo César.
Prof. Tat-Seng Chua is the KITHCT Chair Professor at the School of Computing, National University of Singapore. He was the Acting and Founding Dean of the School from 1998 to 2000. His main research interests include multimedia information retrieval, unstructured multimodal analytics, and emerging applications in chatbots, wellness, and FinTech. He is the co-Director of NExT, a joint center on Extreme Search between NUS and Tsinghua University.
Dr Chua is the recipient of the 2015 ACM SIGMM Achievements Award. He chairs the steering committees of the ACM International Conference on Multimedia Retrieval (ICMR) and the Multimedia Modeling (MMM) conference series. He was also General Co-Chair of ACM Multimedia 2005, ACM CIVR (now ACM ICMR) 2005, ACM SIGIR 2008, and ACM Web Science 2015, and serves on the editorial boards of several international journals. Dr Chua is the co-founder of two technology startup companies in Singapore. He holds a PhD from the University of Leeds, UK.
Title: From Affective Analysis to Actionable Attributes.
Abstract: Images can convey rich semantics and often induce various kinds of emotions and desires in viewers. As people deal more naturally with images and videos, especially the younger generation who often use images as a medium of communication, images have been increasingly used for many purposes, including advertising, campaigning, user interfaces, or simply as messages. Although many years of research have been devoted to understanding the content of images, this work mostly covers tangible or perceptible content with well-defined meanings; little work has been done on the subjective aspects of images such as emotion, popularity, interestingness, or actionability. What does it mean for an image to convey positive emotion, or to be interesting, stylish, or actionable, and what features help to define such qualities? In fact, other than emotion analysis, very little work has been done on these other aspects of subjective quality. This talk reviews current research on understanding the emotion, interestingness, and popularity of images, and extends this work to model the quality of being actionable. It covers data-driven approaches as well as knowledge- and model-driven and hand-crafted approaches. The area is new and evolving, and this talk will evoke more questions than answers.
Prof. Alberto del Bimbo is Professor of Computer Engineering at the Department of Information Engineering, University of Firenze, and Director of the Media Integration and Communication Center (MICC) at the same university, where he leads a research team investigating cutting-edge solutions in multimedia, multimodal interactivity, and computer vision. The main application fields of his research include digital libraries and cultural heritage, user-enhanced and personalized interactivity, smart environments, surveillance and monitoring, and industrial automation.
Title: Incremental Identity Learning from Video Streams.
Abstract: Face recognition systems have become increasingly popular thanks to the improved performance of classification models and greater computational power. In the real world, the widespread diffusion of cameras and the availability of computing power have expanded the realm of face recognition applications. Open-world scenarios are common, where face recognition has to deal with a large number of unseen subjects to learn. When novel subjects are presented to the system, they must be incorporated into the learning process, requiring the system to discriminate between already known and novel subjects. In such contexts, data are not independent and identically distributed, and the current state of the art in deep learning does not allow unsupervised parameter re-learning to incorporate new information without catastrophic interference. We will discuss our research on unsupervised online incremental learning of face appearance from video streams, which provides effective answers to the requirements of open-world face recognition. In our approach, for each subject observed, we collect deep face descriptors in consecutive frames. Since face images of the same subject in a sequence differ little from one another, we distill the most distinctive descriptors in order to incrementally learn a sufficiently complete and non-redundant representation of the identity of each subject, and to control memory overflow. We show that our learning procedure is asymptotically stable.
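The distillation step sketched in the abstract, keeping only the most distinctive descriptors per subject while bounding memory, could look roughly like the following. This is a minimal illustration under assumptions, not the speaker's actual method: the cosine-similarity threshold, the eviction rule, and all names (`IdentityMemory`, `redundancy_threshold`, `max_size`) are hypothetical.

```python
import math

def _normalize(v):
    """Scale a vector to unit length so dot products become cosine similarities."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class IdentityMemory:
    """Per-subject descriptor memory (hypothetical sketch): store a descriptor
    only if it is sufficiently distinct from those already kept, and evict the
    most redundant stored descriptor when the memory bound is exceeded."""

    def __init__(self, redundancy_threshold=0.95, max_size=50):
        self.redundancy_threshold = redundancy_threshold  # cosine-similarity cutoff
        self.max_size = max_size                          # memory-overflow bound
        self.descriptors = []                             # unit-norm vectors

    def add(self, descriptor):
        d = _normalize(descriptor)
        # Distill: skip near-duplicates of descriptors already stored.
        if any(_dot(d, m) > self.redundancy_threshold for m in self.descriptors):
            return False
        self.descriptors.append(d)
        # Control memory overflow: drop the descriptor most similar to the rest.
        if len(self.descriptors) > self.max_size:
            sims = [max(_dot(a, b) for j, b in enumerate(self.descriptors) if j != i)
                    for i, a in enumerate(self.descriptors)]
            self.descriptors.pop(sims.index(max(sims)))
        return True

    def similarity(self, descriptor):
        """Highest cosine similarity of a probe to any stored descriptor."""
        d = _normalize(descriptor)
        return max((_dot(d, m) for m in self.descriptors), default=-1.0)
```

In use, frames of a tracked face would each yield a descriptor passed to `add`; redundant frames are discarded, while novel views enrich the stored identity, which `similarity` can then match against probes.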
Prof. Pablo César leads the Distributed and Interactive Systems group at CWI (the national research institute for mathematics and computer science in the Netherlands) and is Associate Professor at TU Delft. Pablo's research focuses on modelling and controlling complex collections of media objects (including real-time media and sensor data) that are distributed in time and space. His fundamental interest is in understanding how different customisations of such collections affect the user experience. Pablo is the PI of a public-private partnership project with Xinhuanet and of successful EU-funded projects such as 2-IMMERSE and VRTogether. He has (co-)authored over 100 articles. He is a member of the editorial boards of, among others, IEEE Transactions on Multimedia, IEEE Multimedia, and ACM Transactions on Multimedia (TOMM). Pablo has given tutorials about multimedia systems at prestigious conferences such as ACM Multimedia, CHI, and the WWW conference. He acted as an invited expert at the European Commission's Future Media Internet Architecture Think Tank and participates in standardisation activities at MPEG (point-cloud compression) and ITU (QoE for multi-party tele-meetings).
Title: Sensing the Audience: Connecting Fashion, Senses, and Spaces.
Abstract: We live in a society based on experiences; yet, it is surprising how little is actually known about how people value these experiences. The high-end technical solutions for shaping experiences contrast sharply with the rather conventional mechanisms used to measure them. This talk will give an overview of our efforts to gather data and understand the experience of people attending cultural events, using wearable sensor technology. Through practical case studies in different areas of the creative industries, from theatre-going to clubbing, we will showcase our results and discuss our failures. Working on realistic testing grounds in collaboration with several commercial and academic partners, we have deployed our technology and infrastructure in places such as the National Theatre of China in Shanghai and the Amsterdam Dance Event in the Netherlands. Our approach is to seamlessly connect fashion and textiles with sensing technology and with the environment. The final objective is to create intelligent and empathic systems that can react to the audience and their experience.