import os
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, decode_predictions
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from alibi.explainers import AnchorImage
Cat detection with explainers
Pet image detection
The growing availability of big data has increased the benefits of using complex models, forcing data scientists to choose between accuracy and interpretability of a model’s output. Adding explainability methods to models that are not easily understood helps:
- Ensure transparency in algorithmic decision-making
- Identify potential bias in the training data
- Give a deeper understanding of the process being modeled
These methods give insight into why your model generates specific outputs; explainability algorithms are especially useful in highly regulated industries (e.g., pinpointing the attributes that caused someone to be denied or approved for a home loan). In this notebook we'll use the open-source library alibi to build an anchor explainer and better understand why a generic image-detection model produces certain outputs.
To begin, we load and format the photo into a machine-readable array.
def image_pipeline(image_name):
    # Format custom picture
    resized_image = load_img(image_name, target_size=(299, 299))
    numpy_image = img_to_array(resized_image)
    image_batch = np.expand_dims(numpy_image, axis=0)

    model = InceptionV3(weights='imagenet')
    preds = model.predict(image_batch)

    # Display results
    label = decode_predictions(preds, top=3)
    plt.title(label[0])
    plt.imshow(resized_image)

image_name = 'cat.jpg'
image_pipeline(image_name)
The predict function returns its predictions via Softmax, which means each prediction can be read as the probability that the image falls into one of the categories seen during training. Even though we see this as a cat, the model gives a 100% probability that the image shows a pitcher. There are a few things to note here.
- This is a photo I took of my own cat, so I can be confident that the model has never seen the picture before. To a human eye, it is (hopefully) pretty obvious that the image is of a cat. However, training data, especially for images, often does not accurately reflect photos taken by you or me. For example, the training photos may all be centered, well lit, and free of background clutter, which is unrealistic for everyday images.
- This is a Softmax output, but the title we displayed only shows the model's best guesses, not how strong each one is. This is a subtle distinction: the model believes that pitcher is the best guess, but it could, in principle, be only 1% certain of that prediction. To address this, we could surface the scores directly (see the sketch below), add new outcome metrics, or tweak the model.
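As a quick sanity check before changing anything, the confidence scores are already available: decode_predictions returns a (class id, label, score) tuple for each guess, so we can print how strong each prediction actually is. A minimal sketch, reusing the imports, image_name, and preprocessing already defined above:

def print_top_predictions(image_name):
    # Same preprocessing as image_pipeline above
    resized_image = load_img(image_name, target_size=(299, 299))
    image_batch = np.expand_dims(img_to_array(resized_image), axis=0)

    model = InceptionV3(weights='imagenet')
    preds = model.predict(image_batch)

    # decode_predictions returns (class id, label, softmax score) tuples
    for _, label, score in decode_predictions(preds, top=3)[0]:
        print(f"{label}: {score:.1%}")

print_top_predictions(image_name)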
Before we do any of that though, it could be useful for us to better understand how the model is making this prediction. We’ll use something called an explainer to dive deeper.
Explainability
We will use a local version of the model to build an anchor explainer. This will show us what parts of the photo the model used in order to give the “pitcher” prediction.
def explainability_pipeline(image_name):
    # format custom picture
    resized_image = load_img(image_name, target_size=(299, 299))
    numpy_image = img_to_array(resized_image)
    image_batch = np.expand_dims(numpy_image, axis=0)

    # set hyperparameters
    image_shape = (299, 299, 3)
    segmentation_fn = 'slic'
    kwargs = {'n_segments': 15, 'compactness': 20, 'sigma': .5}

    # load model
    model = InceptionV3(weights='imagenet')

    # explainer, from alibi library
    explainer = AnchorImage(model, image_shape, segmentation_fn=segmentation_fn,
                            segmentation_kwargs=kwargs, images_background=None)

    cat_image = image_batch[0]
    explanation = explainer.explain(cat_image, threshold=.95,
                                    p_sample=.5, tau=0.25)

    plt.imshow(explanation.anchor)

explainability_pipeline(image_name)
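A note on the segmentation settings: the anchor can only be assembled from the superpixels that slic produces, so n_segments, compactness, and sigma control how fine-grained any explanation can be. To preview those superpixels on their own, here is a small sketch that calls scikit-image's slic directly (the routine alibi uses when segmentation_fn='slic'), with the same illustrative values as above:

from skimage.segmentation import slic, mark_boundaries

# Preview how the photo is carved into superpixels (sketch; values are illustrative, not tuned)
preview = img_to_array(load_img(image_name, target_size=(299, 299))) / 255.
segments = slic(preview, n_segments=15, compactness=20, sigma=.5)
plt.imshow(mark_boundaries(preview, segments))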
It looks as though the model relies primarily on the background to make this prediction, so it comes as no surprise that the classification is wildly incorrect. With this information in mind, data scientists may decide to go back and explore the data and the model more thoroughly. It may be that the training photos of cats all have solid backgrounds, different lighting, cats with different coat colors or hair lengths, or some other feature that caused this image to be misclassified.
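To dig a little deeper, the object returned by explainer.explain carries more than the anchor image. Below is a hedged sketch, assuming explainability_pipeline is modified to end with return explanation; the precision and coverage attribute names may vary across alibi versions:

# Sketch: assumes explainability_pipeline now ends with `return explanation`
explanation = explainability_pipeline(image_name)

# Original photo and the anchor (the superpixels the model relied on) side by side
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(load_img(image_name, target_size=(299, 299)))
ax1.set_title('original')
ax2.imshow(explanation.anchor)
ax2.set_title('anchor')

# Precision: how often the "pitcher" prediction holds when the anchor is present
# Coverage: how much of the perturbation space the anchor applies to
# (attribute names may differ across alibi versions)
print('precision:', explanation.precision)
print('coverage:', explanation.coverage)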
Explainers will not fix the model itself. However, they are useful tools for data scientists to build well-architected models by exposing bias in training data and giving transparency to black-box models.
Try your own pets!
Either download or git clone this demo repo to your local machine. Add your own photo to the data folder. Change your-image-name in the final cell (see below for an example) to match your image's name, and press run!
custom_image_name = '../data/your-image-name.jpg'

image_pipeline(custom_image_name)
explainability_pipeline(custom_image_name)