Invited Speakers

Dr. Vivek K. Singh is the founding Director of the Behavioral Informatics Lab and an Associate Professor in the School of Communication and Information at Rutgers University. Before joining Rutgers, he was a post-doctoral researcher at the MIT Media Lab. He holds a Ph.D. in Computer Science from the University of California, Irvine. His work has appeared in leading disciplinary and interdisciplinary publication venues (e.g., Science, ACM Multimedia, ACM CHI) and has been covered by popular media (e.g., the New York Times, the BBC, and the Wall Street Journal). He was selected as one of the “Rising Star Speakers” by ACM SIG-Multimedia in 2016.

Title: Auditing and Controlling Algorithmic Bias

Abstract: Today, Artificial Intelligence algorithms are used to make many decisions affecting human lives, and a number of these algorithms, such as those used in parole decisions, have been reported to be biased. In this talk, I will share some recent work from our lab on auditing algorithms for bias, designing ways to reduce bias, and expanding the definition of bias. This includes applications such as image search, health information dissemination, and cyberbullying detection. The results will cover a range of data modalities (e.g., visual, textual, and social) as well as techniques such as fair adversarial networks, flexible fair regression, and fairness-aware fusion.
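
The idea of auditing an algorithm for bias can be made concrete with a simple group-fairness check. The following is a minimal sketch only (not code from the speaker's lab; the toy predictions and the choice of the demographic-parity criterion are assumptions for this example): it measures the gap in positive-prediction rates between two groups of a classifier's outputs.

```python
# Illustrative bias-audit sketch: compute the demographic parity gap,
# i.e., the difference in positive-prediction rates between two groups.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates across two groups.

    y_pred : array of 0/1 classifier predictions
    group  : array of 0/1 group labels (e.g., a protected attribute)
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions: group 1 receives far fewer positive outcomes.
preds  = np.array([1, 1, 1, 0, 1, 1, 0, 0, 0, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_gap(preds, groups))  # ~0.6: a large gap, worth flagging
```

A zero gap means both groups receive positive predictions at the same rate; real audits would also examine other criteria (e.g., error-rate parity) across more than two groups.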

Dr. Katharina Reinecke is an Associate Professor of Computer Science & Engineering at the University of Washington. Her research in human-computer interaction explores how humans’ interaction with technology varies with their cultural, geographic, and demographic background. To study these differences, she conducts large-scale online studies with her virtual lab, LabintheWild, an experiment platform for behavioral studies that lets participants compare themselves to others in exchange for study participation. Using data from LabintheWild experiments, her research group builds systems that adapt to these differences and that are more aesthetically appealing, more intuitive, and more usable for specific user groups.

Title: Bias and unintended consequences of WEIRD technology

Abstract: Information technology is most commonly designed by people who are Western, Educated, Industrialized, Rich, and Democratic (WEIRD, for short). This often introduces biases and unintended consequences for less WEIRD people around the world. In this talk, I will walk through several examples of how technology is often biased against less WEIRD people and explain how my lab uses our experiment platform LabintheWild.org to find out how we might design information technologies differently to be more inclusive and fair to all.

Dr. Cristian Canton is a research manager at Facebook, where he leads the company's AI Red Team, which focuses on understanding and preventing misuses of AI; he is also the engineering manager for the DeepFakes Detection Challenge (DFDC). Previously, he managed the computer vision team working on objectionable and harmful content (i.e., detecting and removing harmful visual content on Facebook). From 2012 to 2016, he was at Microsoft Research in Redmond (USA) and Cambridge (UK), where he worked on large-scale computer vision and machine learning problems. From 2009 to 2012, he was the lead engineer at Vicon (Oxford), bringing computer vision to visual effects production for the film industry. He has organized several workshops at top-tier conferences (MediaForensics and DeepVision, among others).

Title: Abuses and misuses of AI: prevention vs reaction

Abstract: As AI becomes more ubiquitous and part of almost every aspect of our lives, professional and personal, it is necessary to consider its potential harms: from exploitation of AI weaknesses for nefarious purposes (e.g., adversarial attacks against classifiers) to abuses of otherwise harmless technologies (e.g., deepfakes used to spread misinformation). Reactively addressing these misuses and abuses after they have already occurred has proven costly in many dimensions (human, economic, etc.); hence, a more preventive approach emerges as an alternative. In this talk, we will walk through some of these adversarial AI scenarios and show how quantifying and understanding risks while designing AI systems is becoming imperative for researchers and practitioners.
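
To make the first category concrete, here is a minimal sketch of a fast-gradient-sign-method (FGSM) style adversarial attack on a toy logistic classifier. The model, weights, inputs, and attack budget are all assumptions invented for illustration, not material from the talk; they show only the general mechanism of perturbing an input to flip a prediction.

```python
# Illustrative FGSM-style attack on a toy logistic regression classifier:
# p(y=1 | x) = sigmoid(w.x + b). A small, targeted perturbation of the
# input is enough to flip a confident prediction.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained model parameters (assumed for this example).
w = np.array([2.0, -1.5])
b = 0.1

x = np.array([1.0, 0.5])       # an input the model classifies correctly
y = 1                          # its true label
p = sigmoid(w @ x + b)         # confident prediction for class 1 (~0.79)

# FGSM: step in the direction that increases the loss. For cross-entropy
# with a logistic model, the gradient w.r.t. the input is (p - y) * w.
grad = (p - y) * w
eps = 0.5                      # attack budget (L-infinity norm)
x_adv = x + eps * np.sign(grad)

p_adv = sigmoid(w @ x_adv + b)  # drops to ~0.40: the prediction flips
print(f"clean: p={p:.3f}  adversarial: p={p_adv:.3f}")
```

Defending against this kind of exploitation before deployment, rather than after an incident, is an instance of the preventive stance the talk advocates.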
