Poster at IJCAI 2015: Emotions in Argumentation

In addition to our paper at IJCAI 2015, we also presented a poster, which we include below:

[Poster: Emotions in Argumentation, IJCAI 2015]

Get the PDF of the Poster IJCAI 2015


Publication of the argumentation and emotions dataset

We are pleased to announce the online publication of the first dataset of textual arguments annotated with emotions, resulting from the first set of empirical experiments carried out in the SEEMPAD project.

The dataset is available here.

More details about the content of the dataset as well as its structure can be found in the IJCAI paper (see Publications page).

Visit of Sahbi Benlamine to the Wimmics team

Sahbi Benlamine, currently a PhD student at the University of Montreal (Heron Laboratory) under the supervision of Prof. Claude Frasson, is visiting the Wimmics team in the context of the SEEMPAD project from May 11th to May 23rd, 2015.

New publication accepted — IJCAI-2015

The following paper about the results of the first set of experiments of the SEEMPAD project has been accepted:

Sahbi Benlamine (University of Montreal), Maher Chaouachi (University of Montreal), Serena Villata (Inria Sophia Antipolis), Elena Cabrio (Inria Sophia Antipolis), Claude Frasson (University of Montreal), Fabien Gandon (Inria Sophia Antipolis). Emotions in Argumentation: an Empirical Evaluation. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI-2015), Buenos Aires, Argentina, July 25-31, 2015.

More details on the Publications page.

First annual report and first experimental results

Scientific progress

During the first year, we defined the protocol for the first experimental setting, which represents the first stage of the proof of concept. The goal of the first experiment is to carry out a feasibility study of the annotation of a corpus of natural language arguments with emotions. The experiment involved a group of 20 participants, recruited by the Heron Lab. In particular, the first experiment consisted of the following steps:

  • Starting from an issue to be discussed, provided by the animators, the arguments proposed by the participants are collected.
  • These arguments are then associated with the emotional signals detected through the dedicated devices of the Heron Lab. More precisely, the workload/engagement emotional states and the facial emotions of the participants are extracted during the debate, using an EEG headset and a Face Emotion Recognition tool, respectively.
  • In a post-processing phase on the collected data, we have synchronized the arguments put forward at instant t with the emotional indexes we retrieved.
  • The output of this (still ongoing) post-processing phase will be an argumentation graph for each discussion addressed by the discussion groups. These argumentation graphs connect the arguments to each other by support or attack relations, and each argument is labeled with the source that proposed it and with the emotional state of that source and of the other participants at the time the argument was put on the table (a minimal sketch of such a graph follows this list).
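As a minimal sketch of this output format (our own illustration with hypothetical class and field names; the project's actual data model is described in the IJCAI paper):

    from dataclasses import dataclass, field

    @dataclass
    class Argument:
        text: str                 # the natural language argument
        source: str               # participant who put it on the table
        timestamp: float          # instant t, used for the synchronization
        # emotional state of every participant when the argument was posted,
        # e.g. {"participant1": {"facial": "Happy", "engagement": 0.7}}
        emotions: dict = field(default_factory=dict)

    @dataclass
    class Relation:
        origin: int               # index of the supporting/attacking argument
        target: int               # index of the argument it supports/attacks
        kind: str                 # "support" or "attack"

    @dataclass
    class ArgumentationGraph:
        arguments: list = field(default_factory=list)   # Argument nodes
        relations: list = field(default_factory=list)   # Relation edges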

Visits

Fabien Gandon visited UQAM and UdM in July 2014 to:

  • Meet the three researchers involved in setting up the sensing devices, and choose the equipment and the basic setup of the experiment.
  • Test the software for cognitive load analysis.
  • Discuss the different options for the size of the groups and the number and combinations of devices available.

Claude Frasson visited Inria in June and August 2014 to:

  • Explain the devices and the tools available in the Heron Lab to detect emotions, and discuss which ones best suit the needs of the SEEMPAD project.
  • Discuss the protocol of the first experimental setting and the practical organization of the experiment in the Heron Lab (participants’ recruitment, questionnaires to be filled in before and after the experiment, groups and turns).


Video-conferences

The Inria team (Cabrio, Gandon, Villata) and the Heron team (Benlamine, Chaouachi, Frasson) had numerous video-conferences during this first year of the SEEMPAD project:

  • March 2014: presentation of the two teams and their research topics.
  • May 2014: definition of the first steps to be addressed in the project, decision about the goal of the first experiment.
  • June 2014: definition of a first draft of the protocol for the experimental setting.
  • July 2014: refinement of the experimental protocol.
  • August 2014: definition of the topics to be used for the debates in the experiment.
  • September 2014: discussion about the participants’ recruitment document and the post-experimental questionnaire.


Pre-experimental session

Before starting the first experiment, the two teams simulated a first experimental session in order to verify the feasibility of the distributed experimental setting. This pre-experimental session took place in mid-October 2014. Its successful outcome encouraged us to continue with the experiment.


Experimental sessions

Each experimental session involved 4 participants and two moderators. An experimental session consisted of 2 debates of about 20 minutes each, in which the participants and the moderators debated together. We involved a total of 20 participants across 12 debate topics. The experimental sessions took place from November 18th to November 25th, 2014.

Production

No publication has been finalized yet, but the first synchronized dataset will be available shortly.

First Experimental Setting – Protocol

AN EXPERIMENT FOR STUDYING THE FEASIBILITY OF THE ANNOTATION OF A CORPUS OF NATURAL LANGUAGE ARGUMENTS WITH EMOTIONS
PROTOCOL – First experimental setting (November 2014)
General goal of the first set of experiments: Feasibility study of the annotation of a corpus of natural language arguments with emotions.
EXPERIMENT #1 (associating arguments with the workload/engagement emotional states detected by an EEG headset and the facial emotions detected by a Face Emotion Recognition tool)
Goal:
Starting from an issue to be discussed, provided by the animators, the aim of the experiment is to collect the arguments proposed by the participants and to associate these arguments with the workload/engagement emotional states and the facial emotions of the participants. During a post-processing phase on the collected data, we will synchronize the arguments put forward at instant t with the emotional indexes we retrieved. We will then build the resulting argumentation graph for each discussion addressed by the discussion groups. These argumentation graphs will connect arguments to each other by support or attack relations. Finally, the argumentation graphs will be labeled with the source that proposed each argument, and with the emotional state of that source and of the other participants at the time the argument was introduced in the discussion.
Terminology:
  • Argument: an argument is a piece of text proposed by a participant in the debate. Typically, arguments aim to promote the opinion of the debater in the debate.
  • Opinion: the overall opinion of the debater about the issue to be debated, e.g., “Ban animal testing”. The opinion is promoted in the debate through arguments, which support or attack the arguments proposed by the other participants (if the opinions converge there will be a support, if the opinions diverge there will be an attack); a toy illustration of this rule follows the list.
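As a toy sketch of the convergence rule just stated (our own illustration; the helper name and the string-equality test are hypothetical simplifications, since real opinions are free text):

    # Toy reading of the rule above: converging opinions yield a support
    # edge between the corresponding arguments, diverging ones an attack.
    def relation(opinion_a: str, opinion_b: str) -> str:
        return "support" if opinion_a == opinion_b else "attack"

    # relation("ban animal testing", "ban animal testing")   -> "support"
    # relation("ban animal testing", "allow animal testing") -> "attack"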
Topics of discussion:
Source of the issues to be debated: http://idebate.org/debatabase
Roles:
  • Participant: expected to provide his/her own opinion about the issue of the debate proposed by the animators, and to argue with the other participants in order to convince them (in case of initial disagreement) of the soundness of his/her viewpoint.
  • Animator: expected to propose the initial issue to be discussed to the participants of the debate. If active argumentation among the participants is lacking, the animator proposes pro and con arguments (with respect to the main issue) to reactivate the discussion. These pro/con arguments are selected from a fixed set of arguments extracted from the iDebate platform.
Involved people and location:
  • Participants: 4 participants for each discussion group (each participant is kept apart from the others: two participants in one room and two in another).
  • Animators: 1 animator for each discussion group (the animator is also placed alone in a room). The animator interacts with the participants through the debate platform.
Debate platform
The debate platform is available at http://webchat.freenode.net/ (IRC network). It suffices to connect to that webpage, choose a nickname (participant1, participant2, etc.), and select the channel #seempadDebate1.
If a graphical client is preferred, we suggest using Pidgin (https://www.pidgin.im/). Instructions for using Pidgin on the Freenode network: https://freenode.net/irc_servers.shtml. A transcript-logging sketch follows.
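Since the debates happen over plain IRC, the chat can also be logged programmatically. Below is a minimal sketch, assuming the Freenode server and the channel named above; the nickname seempadLogger is a hypothetical choice and the script is our own illustration, not part of the experimental setup:

    import socket
    import time

    SERVER = "chat.freenode.net"   # Freenode IRC server (assumed default)
    PORT = 6667                    # plain-text IRC port
    NICK = "seempadLogger"         # hypothetical nickname for the logger
    CHANNEL = "#seempadDebate1"    # channel from the protocol above

    sock = socket.create_connection((SERVER, PORT))
    sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :SEEMPAD debate logger\r\n".encode())
    sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

    buffer = ""
    while True:
        buffer += sock.recv(4096).decode(errors="replace")
        *lines, buffer = buffer.split("\r\n")    # keep any partial line
        for line in lines:
            if line.startswith("PING"):
                # answer server keepalives so the connection stays open
                sock.sendall(line.replace("PING", "PONG", 1).encode() + b"\r\n")
            elif "PRIVMSG" in line:
                # one entry per chat message: wall-clock time + raw IRC line;
                # these timestamps are what the post-processing phase needs
                print(time.strftime("%Y-%m-%d %H:%M:%S"), line)

Each printed line can later be parsed into (timestamp, participant, argument) triples for synchronization with the emotional indexes.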
Devices for participants:
  • 1 laptop or 1 desktop device equipped with internet access and camera to detect facial emotions.
  • 1 EEG headset (connected to the engagement/workload index detection system).
Devices for animators:
  • 1 laptop or 1 desktop device equipped with internet access
Procedure:
  • Phase 1: Familiarization of the participants with the debate platform.
  • Phase 2: Debate. Participants take part in two debates, for a maximum of 30 minutes overall:
      • The animator provides the debaters with the topic to be discussed;
      • The animator asks each participant to provide a general statement of his/her opinion on the topic;
      • Each participant presents his/her opinion to the others;
      • Participants are asked to comment on the opinions expressed;
      • If needed (no debate among the participants), the animator posts an argument to be commented on.
  • Phase 3: Debriefing. Each participant is asked to complete a short questionnaire about his/her experience in the debate:
      • What was your starting opinion on the discussed topic, before entering the debate?
      • What is your final opinion on the discussed topic, after the debate?
      • If your opinion changed, why (i.e., which argument(s) made you change your mind)?
Duration of a debate: about 15 minutes.
Measured variables:
  • engagement (measurement tool: EEG headset)
  • workload (measurement tool: EEG headset)
  • positive/negative attitude with respect to the issue of the debate (technique: argumentation)
  • positive/negative attitude with respect to a particular argument proposed in the discussion (technique: argumentation)
  • positive/negative attitude with respect to a particular participant of the discussion (technique: argumentation)
  • emotions: Neutral, Happy, Sad, Angry, Surprised, Scared, and Disgusted (measurement tool: FaceReader facial expression recognition software)
Post-processing phase:
  1. Synchronize the argumentation (i.e., the arguments proposed at time t) with the emotional indexes retrieved using the EEG headset and the Facial Emotion Recognition (see the sketch after this list);
  2. Build the argumentation graph by detecting support and attack relations among the arguments proposed in each discussion (using the methodology defined in [Cabrio & Villata, Argument & Computation 2013]);
  3. Associate each argument with the participant who proposed it in the discussion;
  4. Build a dataset of argumentation graphs labeled with the emotional indexes of the participants (each argument is associated with the indexes of the four participants of the debate);
  5. Compute the “winning” opinions in each debate.
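As an illustration of step 1, here is a minimal sketch under stated assumptions: each sensor log is a list of (timestamp, participant, value) tuples, each argument carries the instant t at which it was posted, and the 2-second matching window is an arbitrary illustrative choice rather than a value from the project:

    from dataclasses import dataclass, field

    @dataclass
    class Argument:
        text: str                                     # the argument as posted
        timestamp: float                              # instant t from the chat log
        emotions: dict = field(default_factory=dict)  # participant -> indexes

    def synchronize(arguments, eeg_log, face_log, window=2.0):
        """Attach to each argument the EEG indexes and facial emotions
        recorded within `window` seconds of the instant it was posted."""
        for arg in arguments:
            for t, who, engagement in eeg_log:        # (timestamp, participant, index)
                if abs(t - arg.timestamp) <= window:
                    arg.emotions.setdefault(who, {})["engagement"] = engagement
            for t, who, emotion in face_log:          # (timestamp, participant, emotion)
                if abs(t - arg.timestamp) <= window:
                    arg.emotions.setdefault(who, {})["facial"] = emotion
        return arguments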

Call for participation in an experiment


PDF: Annonce Seempad-1

Review of the devices for experimentation

We reviewed the available devices:

[Photos of the reviewed devices, taken July 10, 2014]

Sensing | Equipment | Quantity | Model | Reference
Eye tracking | Tobii Eye Tracker | 2x | Tobii TX300 | http://www.tobii.com/fr/eye-tracking-research/global/products/hardware/tobii-tx300-eye-tracker/
Eye tracking | Tobii Glasses | 2x | Tobii Glasses | http://www.tobii.com/Global/Analysis/Downloads/User_Manuals_and_Guides/Tobii%20Glasses%20User%20Manual.pdf?epslanguage=en
Eye tracking | Tobii Eye Tracker | 1x | Tobii T60 | http://www.tobii.com/Global/Analysis/Downloads/User_Manuals_and_Guides/Tobii_T60_T120_EyeTracker_UserManual.pdf
Conductivity | BIOPAC | 1x | BIOPAC MP150 | http://www.biopac.com/data-acquisition-system-mp150-system-glp-win
Conductivity | ProComp Infiniti | 1x (attached to the white coat) | SA7500 | http://www.thoughttechnology.com/pdf/manuals/SA7510%20rev.%207%20ProComp%20Infiniti%20User%20Manual.pdf
EEG | Emotiv EPOC | 1x (a 2nd unit is under repair) | Model 1.0 | http://emotiv.com/product-specs/Emotiv%20EPOC%20Specifications%202014.pdf
EEG | B-Alert | 1x | B-Alert x24 | http://advancedbrainmonitoring.com/xseries/x24/