Softskills Seminar

Content

This course is a mandatory course in the M2 year of the "Data AI" Master's program of the Institut Polytechnique de Paris; it is also open to students of other programs. The purpose of this course is to train students to give scientific presentations.

Every student chooses one research paper from the list of proposed papers and prepares a 20-minute presentation about it. For this purpose, they can request the help of the paper's advisor (by email and/or in a meeting). The student then gives the presentation in the allocated time slot of the Softskills seminar, in the presence of the lecturer. Students are warmly encouraged to follow the advice on giving good talks given during the first session.

Each presentation is followed by a question-and-answer session, in which both students and lecturers can ask the presenter questions about the paper. To stimulate the discussion, each student is assigned to another paper as a "devil's advocate". In this role (which is not revealed to the other students), they prepare questions for the presenter. All students, however, are invited to participate in the question-and-answer session.

Grading

The course is graded by

Schedule

The course takes place on Tuesday afternoons (13:30-16:30) at Telecom Paris.
September 10th: Introduction (Room 1C47)
Given by the lecturer, Fabian Suchanek
  1. Introduction
  2. How to give good talks
  3. How to do a PhD
September 17th (Room 1C39)
This session and all following sessions consist of student talks. Each entry lists the accompanying lecturer, the paper number, the paper title, and the presenting student (once the student has been determined).
  1. Mehwish Alam 1: LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models (Pierre CESAR)
  2. Mehwish Alam 2: Knowledge Fusion of Large Language Models (Ivanina IVANOVA)
  3. Mehwish Alam 3: Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning (William Liaw)
September 24th (Room 1C47)
  1. Matthieu Labeau 1: Efficient Nearest Neighbor Language Models (Matteo DENIS)
  2. Matthieu Labeau 2: From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models (Chenwei Wan)
  3. Matthieu Labeau 3: Evaluating the Deductive Competence of Large Language Models (Amal MANI)
  4. Fabian Suchanek 1: GenIE: Generative Information Extraction (Eddie GROH)
October 1st (Room 1C47)
  1. Nils Holzenberger 1: Constrained Language Models Yield Few-Shot Semantic Parsers (Cynthia FUERTES PANIZO)
  2. Nils Holzenberger 2: Neural Module Networks for Reasoning over Text (Abhay MATHUR)
  3. Nils Holzenberger 3: Scalable Methods for Annotating Legal-Decision Corpora (Mark Daychman)
  4. Fabian Suchanek 2: Model Agnostic Supervised Local Explanations (Florian Morel)
October 8th (Room 1C39)
  1. Pietro Gori 1: A Simple Framework for Contrastive Learning of Visual Representations (Clément Laroudie)
  2. Mounîm A. El Yacoubi 1: Self-Supervised Pre-training for Time Series Classification (Matin Zivdar)
  3. Mounîm A. El Yacoubi 2: Temporal Rule-Based Counterfactual Explanations for Multivariate Time Series (Adnane El Bouhali)
  4. Fabian Suchanek 3: Discovering Denial Constraints (Gatien Roujanski)
October 15th (Room 1D19)
  1. Tiphaine Viard 1/Maria Boritchev 2: Tackling Language Modelling Bias in Support of Linguistic Diversity (Dmitrii Timkin)
  2. Tiphaine Viard 2: Algorithmic Unfairness through the Lens of EU Non-Discrimination Law (Mustafa Hayri Bilgin)
  3. Tiphaine Viard 3: AI as super-controversy: Eliciting AI and society controversies with an extended expert community in the UK (Jordan Sassoon)
  4. Sophie Chabridon 1: "Like rearranging deck chairs on the Titanic"? Feasibility, Fairness, and Ethical Concerns of a Citizen Carbon Budget for Reducing CO2 Emissions (Sri Raam APPAKUTTI PALANI)
October 22nd (Room 1C39)
  1. Julien Alexandre dit Sandretto 1: Revising Hull and Box Consistency (Kazeto Fukasawa)
  2. Julien Alexandre dit Sandretto 2: Runge–Kutta Theory and Constraint Programming (Eduardo GUIMARAES)
  3. David Filliat 1: ViNT: A Foundation Model for Visual Navigation (Kian Bakhtari)
  4. David Filliat 2: Offline Reinforcement Learning for Visual Navigation (Antoine Domingues)
  5. Maria Boritchev 1: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Tim Luka Horstmann)
October 29th
Vacation, no course
November 5th (Room 1C43)
  1. Winston Maxwell 1: A Critical Analysis of the Largest Source for Generative AI Training Data: Common Crawl (Yann Millet)
  2. Winston Maxwell 2: Code Is Law by Lawrence Lessig (please contact Winston if you choose this paper) (RENOUT Nicolas)
  3. Zacchary Sadeddine 1: One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline (Clément Martineau)
  4. Ada Diaconescu 1: Quantifying causal emergence shows that macro can beat micro (Mathilde Bonin)
  5. Tom Calamai 1: Do Machine Learning Models Memorize or Generalize? (Roel George)