
Short Course 2023

All the new content will be available by December 2022.

Next meet-up: 17th November, 5 PM CET. Click to join.

Course Overview (NEW structure!)

  1. A formal context and definitions of “AI interpretability”, connecting the technical meaning to its social interpretation.
  2. The “where and why” of interpretability: where should we apply interpretability, and how should we choose methods depending on the scope?
  3. Transparent models and interpretable design: simple linear regression, suppressor variables, interpretable decision sets and Explainable Boosting Machines (a small worked example of the suppressor effect follows this list).
  4. The “three dimensions” of interpretability: dataset exploration, output explanation, model representations. Includes Activation Maximization, LIME surrogates, Class Activation Maps and Concept Attribution methods, covered in detail with hands-on exercises (a minimal surrogate sketch also follows this list).
  5. Plug-and-play post-hoc explainability toolboxes (will be made available in November)
  6. Evaluation of interpretability methods. Includes the explainability benchmark and Quantus. (will be made available in November)
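
The suppressor variables in item 3 are worth a quick numerical illustration. In the sketch below (an assumed toy setup, not course material), the feature x2 is nearly uncorrelated with the target y, yet it earns a large regression weight because it cancels the noise contaminating x1. This is one reason raw linear weights can mislead interpretation.

```python
# A minimal sketch of the suppressor effect (illustrative setup, not course code).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
signal = rng.normal(size=n)   # the part of x1 that truly drives y
noise = rng.normal(size=n)    # nuisance variation shared by x1 and x2
x1 = signal + noise           # contaminated predictor
x2 = noise                    # suppressor: correlated with x1, not with y
y = signal                    # here y = x1 - x2 exactly

print(f"corr(x2, y) = {np.corrcoef(x2, y)[0, 1]:.3f}")   # ~0.0

r2_x1 = LinearRegression().fit(x1[:, None], y).score(x1[:, None], y)
X = np.column_stack([x1, x2])
both = LinearRegression().fit(X, y)
print(f"R^2 with x1 only:   {r2_x1:.2f}")                # ~0.50
print(f"R^2 with x1 and x2: {both.score(X, y):.2f}")     # ~1.00
print(f"weights: {both.coef_.round(2)}")                 # ~[1.0, -1.0]
```

Despite its near-zero correlation with y, dropping x2 halves the explained variance; a naive reading of the weights would wrongly flag x2 as an important cause of y.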
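
The LIME surrogates in item 4 follow a simple recipe: perturb the input, query the black-box model, weight the samples by proximity, and fit a local linear model whose coefficients serve as the explanation. Below is a minimal sketch of that recipe in plain NumPy/scikit-learn; the helper `explain_locally` is hypothetical and is not the `lime` package's API.

```python
# A minimal LIME-style local surrogate (a sketch of the idea, not the lime package).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, predict_proba, n_samples=2000, kernel_width=2.0):
    rng = np.random.default_rng(0)
    # 1. Sample perturbations in a neighbourhood of the instance x.
    Z = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed points.
    p = predict_proba(Z)[:, 1]
    # 3. Weight each sample by an exponential kernel on its distance to x.
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients explain the
    #    black-box prediction around x.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

print(explain_locally(X[0], black_box.predict_proba).round(3))
```

The course exercises use the full method (with interpretable input representations and sparsity); this sketch only shows the perturb-weight-fit core.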

Advanced Topics

  1. Causality and causal explanation generation: Causal analysis for Interpretability, presentation by Sumedha Singla (Ph.D. student)
  2. Attention and explanation: invited speaker TBD
  3. Interpretability-driven model performance boosting

Invited speakers (recorded videos)

  • Pitfalls of Saliency Map Interpretation in Deep Neural Networks, presentation by Suraj Srinivas (Ph.D.)
  • Causal analysis for Interpretability, presentation by Sumedha Singla (Ph.D. student)
  • From attention models and eye tracking to explainability. Invited talk by Prof. Jenny Benois-Pineau
  • LIME for Medical Imaging data, presentation by Iam Palatnik (Ph.D.)

Upcoming Meetings:

17.11.22, 5 PM CET, Nataliia Molchanova (Ph.D. student): On interpretability for visual segmentation tasks (reading group). Click to join.

Prerequisites:

Confidence in linear algebra, probability, and machine learning. Experience with Python, NumPy, and TensorFlow.

This semester the course will be organized mostly for self-study, which allows us to increase the number of participants. If you need supervision on the assignments, please fill in the specific request form and we will reply to you. To guarantee proper supervision, we keep the number of supervised participants limited. Make sure to reserve a spot for this or the fall semester by registering.

Spring semester 2022 – available content

1: The Where and Why of Interpretability

Prof. Henning Müller and Dr. Mara Graziani

2: The three dimensions of interpretability

Dr. Mara Graziani

3: From attention and eye tracking to explainability

Prof. Jenny Benois-Pineau, University of Bordeaux

4: Concept-based Interpretability

Dr. Mara Graziani

5: LIME for Medical Imaging (NEW)

Dr. Iam Palatnik de Sousa

6: Causal Analysis for Interpretability (NEW)

Dr. Sumedha Singla

7: Pitfalls of Saliency methods (NEW)

Dr. Suraj Srinivas

8: Evaluate your explanations

Anna Hedström (Ph.D. student) and Mara Graziani

The course can be included in the AIDA curriculum of the International Artificial Intelligence Doctoral Academy.

Thanks to the funding of AI4Media, this course is open to all and has no registration fees.

AI4Media is a centre of excellence delivering next-generation AI research and training at the service of media, society, and democracy. The course is part of the AIDA PhD programme and is advertised by the Swiss AI Meetup community.