Home

Short Course on Interpretable AI

We are coming back for a new semester!

The University of Applied Sciences of Western Switzerland is happy to present an expanded edition of this lecture series on AI interpretability for Autumn 2021, from October 25th to December 5th.

The course can be included in the curriculum of AIDA, the International Artificial Intelligence Doctoral Academy.

Course Overview and NEW CLASSES with invited speakers:

  1. The “where and why” of interpretability. Reading of taxonomy papers and perspectives. Mara Graziani
  2. The “three dimensions” of interpretability. Activation Maximization, LIME surrogates, Class Activation Maps and Concept Attribution methods seen in detail, with hands-on exercises (a minimal sketch of one such method appears after this list). Mara Graziani
  3. From attention models and eye tracking to explainability. Invited talk by Prof. Jenny Benois-Pineau
  4. Concept-based interpretability. Mara Graziani
  5. (NEW) LIME for Medical Imaging data, presentation by Iam Palatnik de Sousa (Ph.D.)
  6. (NEW) Causal analysis for Interpretability, presentation by Sumedha Singla (Ph.D. student)
  7. (NEW) Pitfalls of Saliency Map Interpretation in Deep Neural Networks, presentation by Suraj Srinivas (Ph.D.)
  8. Evaluation of interpretability methods. Reading of papers and hands-on exercises. Mara Graziani
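
To give a flavour of the hands-on exercises, here is a minimal Grad-CAM-style class activation map sketch in TensorFlow. It is an illustrative example under stated assumptions, not the official course material: the pretrained MobileNetV2 model, the layer name "Conv_1" and the grad_cam helper are choices made for this sketch only.

```python
# Minimal Grad-CAM-style class activation map sketch (illustrative only).
# Assumes TensorFlow 2.x and an ImageNet-pretrained MobileNetV2; any Keras
# CNN with a named convolutional layer can be substituted.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
last_conv = model.get_layer("Conv_1")  # last conv layer of MobileNetV2

# Model mapping an input image to (conv feature maps, class predictions).
grad_model = tf.keras.Model(model.inputs, [last_conv.output, model.output])

def grad_cam(image, class_index=None):
    """Return a heatmap of the regions that drive the (predicted) class."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        tf.cast(image[None, ...], tf.float32))
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(x)
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_maps)        # d score / d feature maps
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool grads
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_maps, axis=-1)[0]
    cam = tf.nn.relu(cam)                          # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalise to [0, 1]

# Usage sketch: heatmap = grad_cam(np.uint8(np.random.rand(224, 224, 3) * 255))
```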

Prerequisites:

Confidence in linear algebra, probability, and machine learning. Experience with Python, NumPy, and TensorFlow.

This semester the course will be organized mostly for self-study, which allows us to increase the number of participants. If you need supervision on the assignments, please fill in the specific request form and we will reply to you; to guarantee proper supervision, we keep the number of supervised participants limited. Make sure to reserve a spot for this or the fall semester by registering.

1: The Where and Why of Interpretability

Prof. Henning Müller and Mara Graziani (Ph.D. student)

4: Concept-based Interpretability

Mara Graziani (Ph.D. student)

7: Pitfalls of Saliency methods (NEW)

Dr. Suraj Srinivas


2: The three dimensions of interpretability

Mara Graziani (Ph.D. student)

5: LIME for Medical Imaging (NEW)

Dr. Iam Palatnik de Sousa

8: Evaluate your explanations

Mara Graziani (Ph.D. student). LIVE Teams meeting on date TBD

3: From attention and eye tracking to explainability 

Prof. Jenny Benois-Pineau, University of Bordeaux

6: Causal Analysis for Interpretability (NEW)

Sumedha Singla (Ph.D. student)

Thanks to funding from AI4Media, this course is open to all and has no registration fee.

AI4Media is a centre of excellence delivering next-generation AI research and training at the service of media, society and democracy. The course is part of the AIDA PhD programme and is advertised by the Swiss AI Meetup community.
