This course is a mandatory course in the M2 year of the “Data AI” Master’s program of the Institut Polytechnique de Paris; it is also open to students of other programs. The purpose of this course is to train students to give scientific presentations.
Every student chooses one research paper from the list of proposed papers and prepares a 20-minute presentation about it. For this purpose, they can request the help of the paper’s advisor (by email and/or by meeting with them). The student then gives the presentation in the allocated time slot of the Softskills seminar, in the presence of the lecturer. Students are warmly encouraged to take into account the advice on giving good talks provided during the first session.
Each presentation is followed by a question-and-answer session, in which both the students and the lecturers can ask the presenter questions about the paper. To enliven this discussion, each student is assigned to some other paper as the “devil’s advocate”. In this role (which is not revealed to the other students), they prepare some questions for the presenter. All students, however, are invited to participate in the question-and-answer session.
The course is graded as follows:
- 20% oral participation (as devil’s advocate and in general).
The course takes place in the second period of the winter semester 2022-2023, on Wednesday mornings from 9:00 to 12:00 in Amphi 3 at Télécom Paris.
How to give good talks
How to do a PhD
- Louis Jachiet 1: “The Case for Learned Index Structures” (Nehmat)
- Sophie Chabridon 1: “‘If it didn’t happen, why would I change my decision?’: How Judges Respond to Counterfactual Explanations for the Public Safety Assessment” (Caio)
- Pietro Gori 1: “A Simple Framework for Contrastive Learning of Visual Representations” (Bruno)
- Fabian Suchanek 1: “Neural Databases”
- Fabian Suchanek 2: “Differentiable Learning of Logical Rules for Knowledge Base Reasoning” (Ümit)
- Fabian Suchanek 3: “Differentiable Reasoning on Large Knowledge Bases and Natural Language” (Maximilien)
- Tiphaine Viard 1: “Fairness and Abstraction in Sociotechnical Systems” (Jana)
- Ada Diaconescu 1: “Entropy and the Self-Organization of Information and Value”
2022-12-14: Talks (also online)
- Mounîm A. El Yacoubi 1: “Attention Is All You Need” (Andjela)
- Mounîm A. El Yacoubi 2: “Explaining Convolutional Neural Networks using Softmax Gradient Layer-wise Relevance Propagation” (Ariel)
- Ioana Manolescu 2: “Chukonu: A Fully-Featured High-Performance Big Data Framework that Integrates a Native Compute Engine into Spark” (Paola)
- Ioana Manolescu 1: “Babelfish: Efficient Execution of Polyglot Queries” (Alberic)
- Jean-Louis Dessalles 2: “Emergent Symbols through Binding in External Memory” (François)
- Florence Tupin 1: “Noise2Noise: Learning Image Restoration without Clean Data” (Yann)
- Jean-Louis Dessalles 1: “Solving quantitative reasoning problems with language models”
- Chloé Clavel 1: “Relation-aware graph attention networks with relational position encodings for emotion recognition in conversations” (Thomas)
- Winston Maxwell 1: “The accuracy, fairness, and limits of predicting recidivism” (Chenxi)
- Winston Maxwell 2: “Why AI is harder than we think” (Haileleul)
- Amal Dev Parakkat 2: “SurfaceBrush: From Virtual Reality Drawings to Manifold Surfaces” (Putian)
- Amal Dev Parakkat 1: “Detecting Viewer-Perceived Intended Vector Sketch Connectivity”
- Amal Dev Parakkat 5: “Self-Sampling for Neural Point Cloud Consolidation” (Nikolai)
- Samuel Huron 2: “Narrative Visualization: Telling Stories with Data” (Salma)
- Amal Dev Parakkat 4: “Perception-Driven Semi-Structured Boundary Vectorization”
- Amal Dev Parakkat 6: “Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation”
- Julien Alexandre dit Sandretto 1: “Revising Hull and Box Consistency”
- Julien Alexandre dit Sandretto 2: “Runge-Kutta Theory And Constraint Programming”
- Samuel Huron 1: “Toward visualization for games: Theory, design space, and patterns”
- Samuel Huron 3: “Design and Evaluation of Visualization Techniques to Facilitate Argument Exploration”
- Mehwish Alam 1: “Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach” (Maya)
- Mehwish Alam 2: “CAKE: A Scalable Commonsense-Aware Framework For Multi-View Knowledge Graph Completion” (Yixin)
- Mehwish Alam 3: “Time-Aware Language Models as Temporal Knowledge Bases” (Jules)
- Amal Dev Parakkat 3: “Fashion Transfer: Dressing 3D Characters from Stylized Fashion Sketches” (Rebecca)