
Evaluating interpretability

Written in collaboration with Anna Hedström, Ph.D. candidate at Technische Universität Berlin and ideator of the Quantus toolbox.

There is no point in generating misleading or wrong explanations, which is why evaluating interpretability outcomes is a priority in the field.

The development of explainability techniques should always include a stage where we evaluate the quality and trustworthiness of the explanations. As I explained in the meet-up on Nov. 19th 2021, the explanation receiver should be targeted at all steps of the evaluation. However, there are multiple types of receivers, e.g. users, developers, etc.

Watch the full lecture in the video below. Take notes on the following: (i) the type of receiver, (ii) the types of risk involved in generating explanations, and (iii) the multiple steps in XAI evaluation procedures that include users (from the paper “Metrics for Explainable AI: Challenges and Prospects”).

THE QUANTUS TOOLBOX (Content Author: Anna Hedström, Ph.D. candidate at Technische Universität Berlin, hedstroem.anna@gmail.com, twitter: @anna_hedstroem)

If you are looking for an easy-to-apply and intuitive toolbox to evaluate your explainability outcomes on imaging data, I strongly recommend the Quantus Toolbox. Its principal creator, Anna Hedström, kindly agreed to introduce the toolbox for this course. Have a look at the material she shared below.

MOTIVATION FOR QUANTUS

A simple visual comparison of eXplainable Artificial Intelligence (XAI) methods is often not sufficient to decide which explanation method works best, as shown exemplarily in Figure a) for four gradient-based methods: Saliency (Mørch et al., 1995; Baehrens et al., 2010), Integrated Gradients (Sundararajan et al., 2017), GradientShap (Lundberg and Lee, 2017) and FusionGrad (Bykov et al., 2021). Yet such visual comparison is common practice for evaluating XAI methods in the absence of ground-truth data. Quantus was therefore developed: an easy-to-use yet comprehensive toolbox for the quantitative evaluation of explanations, including 30+ different metrics.

With Quantus, we can obtain richer insights into how the methods compare, e.g. b) by holistic quantification across several evaluation criteria and c) by sensitivity analysis of how a single parameter, e.g. the pixel replacement strategy of a faithfulness test, influences the ranking of the XAI methods.

The diagram in Figure b) gives an overview of how each method performs according to five evaluation criteria, namely faithfulness, localisation, robustness, randomisation and complexity; the meaning of each is explained below. The diagram offers a quantitative and qualitative evaluation at the same time. For example, we can see that FusionGrad outperforms the other methods in terms of localisation, robustness and complexity, but it does not give the best guarantees in terms of faithfulness. Depending on the objective of the interpretability analysis, one may use the diagram to prefer one method over another for generating explanations. For instance, Saliency seems to be the best method for obtaining explanations with high faithfulness.

Figure c) shows how a single parameter, namely the pixel replacement strategy of a faithfulness test, can influence the evaluation outcome, as reflected in the ranking of the XAI methods (1-4). Contrary to the intuition that the ranking would remain consistent over different metric parameterisations (which would mean one colour per bar in Figure c), we observe that the ranking differs significantly across experimental settings (as seen by the split of colours within the bars).

EVALUATION METRICS

Quantus started with the goal of collecting existing evaluation metrics that have been introduced in the context of XAI research, to help automate the task of XAI quantification. During implementation, it became clear that XAI metrics most often belong to one of six categories: 1) faithfulness, 2) robustness, 3) localisation, 4) complexity, 5) randomisation or 6) axiomatic metrics. The library contains implementations of all six categories.

  • Faithfulness (↑) quantifies to what extent explanations follow the predictive behaviour of the model, asserting that more important features affect model decisions more strongly 
  • Robustness (↓) measures to what extent explanations are stable when subject to slight perturbations of the input, assuming that the model output stays approximately the same
  • Randomisation (↑) tests to what extent explanations deteriorate as the data labels or the model, e.g., its parameters are increasingly randomised
  • Localisation (↑) tests if the explainable evidence is centred around a region of interest, which may be defined around an object by a bounding box, a segmentation mask or a cell within a grid 
  • Complexity (↓) captures to what extent explanations are concise, i.e., that few features are used to explain a model prediction 
  • Axiomatic (↑) measures if explanations fulfil certain axiomatic properties
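
To make these categories concrete, the following sketch instantiates one example metric per category with its default parameters. The class names reflect our reading of the Quantus API and may differ between versions; Quantus typically warns when defaults are left unchanged.

import quantus

# One example metric per category (class names assumed from the Quantus API;
# they may be renamed in newer releases).
faithfulness_metric = quantus.FaithfulnessCorrelation()        # faithfulness
robustness_metric = quantus.MaxSensitivity()                   # robustness
randomisation_metric = quantus.ModelParameterRandomisation()   # randomisation
localisation_metric = quantus.RelevanceRankAccuracy()          # localisation
complexity_metric = quantus.Sparseness()                       # complexity
axiomatic_metric = quantus.Completeness()                      # axiomatic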

Note that to compute the localisation of an explanation, Quantus requires a reference annotation in the form of a binary mask. These masks usually come with most object segmentation datasets or can be computed separately.
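
As an illustration only, reusing the model, x_batch, y_batch and a_batch_saliency variables defined in the walkthrough below, one could approximate such masks for MNIST by thresholding the digit pixels and pass them to a localisation metric. The metric class and the s_batch parameter name are assumptions based on our reading of the Quantus API.

import numpy as np
import quantus

# Purely illustrative masks: threshold the (0-1 scaled) MNIST digits to obtain a crude
# binary region of interest per image, with the same shape as the inputs.
s_batch = (x_batch > 0.5).astype(np.float32)  # shape: (batch, 1, 28, 28)

# Localisation metrics receive the reference masks through the s_batch argument.
localisation_scores = quantus.RelevanceRankAccuracy()(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=a_batch_saliency,
    s_batch=s_batch,
    device=device,
)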

GET STARTED WITH QUANTUS

The following gives a short introduction to getting started with Quantus. Note that this example is based on the PyTorch framework, but TensorFlow is also supported; only the loading of the model, data and explanations would differ. To get started with Quantus, you need: (i) a model (model), inputs (x_batch) and labels (y_batch), and (ii) some explanations to run the evaluation with (a_batch), plus ground-truth annotations where a metric requires them.

Step 1. Load data and model

Let’s first load the data and model. In this example, we load a pre-trained LeNet that ships with Quantus for the purpose of this tutorial, but you can generally use any PyTorch (or TensorFlow) model instead. To follow this example, you need to have quantus and torch installed, e.g. via pip install 'quantus[torch]'.

import quantus
from quantus.helpers.model.models import LeNet
import torch
import torchvision
from torchvision import transforms

# Enable GPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Load a pre-trained LeNet classification model (architecture at quantus/helpers/models).
model = LeNet()
if device.type == "cpu":
    model.load_state_dict(torch.load("tests/assets/mnist", map_location=torch.device('cpu')))
else:
    model.load_state_dict(torch.load("tests/assets/mnist"))

# Load datasets and make loaders.
test_set = torchvision.datasets.MNIST(root='./sample_data', download=True,
                                      transform=transforms.Compose([transforms.ToTensor()]))
test_loader = torch.utils.data.DataLoader(test_set, batch_size=24)

# Load a batch of inputs and outputs to use for XAI evaluation.
x_batch, y_batch = next(iter(test_loader))
x_batch, y_batch = x_batch.cpu().numpy(), y_batch.cpu().numpy()

Step 2. Load explanations

We still need some explanations to evaluate. For this, there are two possibilities in Quantus. You can provide either: (i) a set of pre-computed attributions, or (ii) an arbitrary explanation function, e.g. the built-in quantus.explain method.

(i) Using pre-computed explanations

Quantus allows you to evaluate explanations that you have pre-computed, assuming that they match the data you provide in x_batch. Let’s say you have explanations for Saliency and Integrated Gradients already pre-computed.  In that case, you can simply load these into corresponding variables a_batch_saliency and a_batch_intgrad:

a_batch_saliency = load("path/to/precomputed/saliency/explanations")
a_batch_intgrad = load("path/to/precomputed/intgrad/explanations")
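
The load call above is just a placeholder. If, for instance, your attributions were saved as NumPy .npy files (the file paths below are hypothetical), the loading step might look like this:

import numpy as np

# Hypothetical paths; point these at wherever your attributions are stored.
a_batch_saliency = np.load("path/to/precomputed/saliency/explanations.npy")
a_batch_intgrad = np.load("path/to/precomputed/intgrad/explanations.npy")

# The attributions should align with x_batch along the batch dimension.
assert a_batch_saliency.shape[0] == x_batch.shape[0]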

Another option is to simply obtain the attributions using one of many XAI frameworks out there, such as Captum, Zennit, tf.explain, or iNNvestigate. The following code example shows how to obtain explanations (Saliency and Integrated Gradients, to be specific) using Captum:

import numpy as np
import captum
from captum.attr import Saliency, IntegratedGradients

# Captum expects torch tensors, so convert the numpy batches from Step 1 back into tensors.
x_batch_t = torch.tensor(x_batch)
y_batch_t = torch.tensor(y_batch)

# Generate Saliency and Integrated Gradients attributions for the batch.
a_batch_saliency = Saliency(model).attribute(inputs=x_batch_t, target=y_batch_t, abs=True).sum(axis=1).cpu().numpy()
a_batch_intgrad = IntegratedGradients(model).attribute(inputs=x_batch_t, target=y_batch_t, baselines=torch.zeros_like(x_batch_t)).sum(axis=1).cpu().numpy()

# Quick assert: everything passed to the metric calls should be a numpy array.
assert all(isinstance(obj, np.ndarray) for obj in [x_batch, y_batch, a_batch_saliency, a_batch_intgrad])

(ii) Passing an explanation function

If you don’t have a pre-computed set of explanations but rather want to pass an arbitrary explanation function that you wish to evaluate with Quantus, this is also possible. For this, you can for example rely on the built-in quantus.explain function to get started, which includes some popular explanation methods (please run quantus.available_methods() to see which ones). Examples of how to use quantus.explain or your own customised explanation function are included in the next section.

[Figure: Saliency and Integrated Gradients attributions for a batch of MNIST test images]

As seen in the image above, the qualitative aspects of the explanations may look fairly uninterpretable; since we lack ground truth of what the explanations should look like, it is hard to draw conclusions about the explainable evidence. To gather quantitative evidence for the quality of the different explanation methods, we can apply Quantus.

Step 3. Evaluate with Quantus

Quantus implements XAI evaluation metrics from different categories (e.g., faithfulness, localisation and robustness), which all inherit from the base quantus.Metric class. To apply a metric to your setting (e.g., Max-Sensitivity), it first needs to be instantiated:

metric = quantus.MaxSensitivity(
    nr_samples=10,
    lower_bound=0.2,
    norm_numerator=quantus.fro_norm,
    norm_denominator=quantus.fro_norm,
    perturb_func=quantus.uniform_noise,
    similarity_func=quantus.difference,
)

and then applied to your model, data, and (pre-computed) explanations:

scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=a_batch_saliency,
    device=device,
)
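
Because the metric returns one score per sample, a simple way to compare the two attribution sets from Step 2 is to evaluate both with the same metric instance and aggregate the scores, for example with the mean. A minimal sketch (for Max-Sensitivity, lower is better):

import numpy as np

# Evaluate both explanation methods with the same Max-Sensitivity instance
# and average the per-sample scores (robustness: lower is better).
results = {}
for method_name, a_batch in [("Saliency", a_batch_saliency), ("IntegratedGradients", a_batch_intgrad)]:
    method_scores = metric(model=model, x_batch=x_batch, y_batch=y_batch, a_batch=a_batch, device=device)
    results[method_name] = float(np.mean(method_scores))

print(results)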

Alternatively, instead of providing pre-computed explanations, you can employ the quantus.explain function, whose method and hyperparameters can be specified through a dictionary passed to explain_func_kwargs.

scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    device=device,
    explain_func=quantus.explain,
    explain_func_kwargs={"method": "Saliency"}
)
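
If you would rather evaluate your own explanation function, you can pass any callable as explain_func. Below is a minimal sketch with a toy gradient-based attribution, assuming the (model, inputs, targets, **kwargs) calling convention used by quantus.explain; check the documentation of your Quantus version for the exact expected signature.

import numpy as np
import torch

def my_explain_func(model, inputs, targets, **kwargs) -> np.ndarray:
    """Toy attribution: absolute input gradients of the target logit, summed over channels."""
    dev = next(model.parameters()).device
    inputs_t = torch.tensor(inputs, dtype=torch.float32, device=dev, requires_grad=True)
    targets_t = torch.as_tensor(targets, dtype=torch.long, device=dev)
    logits = model(inputs_t)
    # Summing each sample's own target logit lets one backward pass yield per-sample gradients.
    logits[torch.arange(len(targets_t)), targets_t].sum().backward()
    return inputs_t.grad.abs().sum(dim=1).cpu().numpy()

scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    device=device,
    explain_func=my_explain_func,
)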

See the Getting started tutorial to run code similar to this example. For more information on how to customise metrics and extend Quantus’ functionality, please see the Getting started guide.

EXTRA TUTORIALS

Further tutorials are available that showcase the many types of analysis that can be done with Quantus. For these, please see the notebooks in the tutorials folder, which include examples such as:

… and more.

THE OPENXAI BENCHMARK FOR TABULAR DATA

If you are working with tabular data, I also recommend having a look at https://open-xai.github.io, a strong initiative in the direction of evaluating XAI based on 1) faithfulness, 2) stability and 3) fairness.
