Title

Enhancing the Stability and Quality Assessment of Visual Explanations for Thorax Disease Classification Using Deep Learning

Abstract

Deep learning models are widely used today, yet their complex internal operations make them opaque to most users. Despite ongoing efforts to explain such black-box models, considerable room for improvement remains. In particular, existing techniques rely on manual assessment of visual explanations, which limits scalability, and many methods require hyperparameter tuning to produce good explanations. Objective, quantifiable performance metrics are therefore essential for evaluating the quality of visual explanations.

Another challenge is the instability of certain explanation methods, such as Local Interpretable Model-agnostic Explanations (LIME). Because LIME relies on random perturbations, variations in the generated samples lead to inconsistent explanations across runs. This inconsistency hampers trust and adoption in critical applications.
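
As a minimal sketch of this instability, the snippet below runs LIME twice on the same image with different random states and compares the selected superpixels; `predict_fn` (a batch-of-images-to-probabilities function) and `chest_xray` (one preprocessed image) are hypothetical placeholders, and the segmentation is fixed so the runs differ only in the random perturbations.

```python
# Minimal sketch of LIME's run-to-run instability.
# Placeholders: `predict_fn` maps a batch of images to class probabilities;
# `chest_xray` is a single preprocessed chest X-ray image.
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm

# Fixed segmentation, so the two runs differ only in the sampled perturbations.
segmenter = SegmentationAlgorithm("quickshift", kernel_size=4,
                                  max_dist=200, ratio=0.2)

def top_superpixels(seed, k=5):
    # A different random state changes the perturbed samples LIME draws.
    explainer = lime_image.LimeImageExplainer(random_state=seed)
    explanation = explainer.explain_instance(
        chest_xray, predict_fn, top_labels=1, num_samples=1000,
        segmentation_fn=segmenter)
    label = explanation.top_labels[0]
    # Keep the k most influential superpixels for the predicted label.
    weights = sorted(explanation.local_exp[label],
                     key=lambda sw: abs(sw[1]), reverse=True)[:k]
    return {segment for segment, _ in weights}

run_a, run_b = top_superpixels(seed=0), top_superpixels(seed=1)
# With purely random sampling the two runs often disagree, so the Jaccard
# overlap of the selected superpixels is frequently below 1.0.
print("overlap:", len(run_a & run_b) / len(run_a | run_b))
```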

In this study, we propose a quantifiable approach to evaluating explainability algorithms for multi-label, multi-class thorax disease diagnosis. Using an InceptionV3-based classifier trained on CheXpert, we employ LIME with various segmentation configurations to explain the classifier's decisions. We define criteria and metrics to measure the quality of visual explanations objectively. Through qualitative and quantitative analysis on the VinDr-CXR dataset, we determine optimal hyperparameter settings.
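
The sketch below illustrates one way such a segmentation sweep can be set up with LIME's scikit-image wrappers; the specific algorithms and parameter values are assumptions for illustration, and `predict_fn` and `chest_xray` are again hypothetical placeholders.

```python
# Sketch of sweeping LIME segmentation configurations; the explanations
# produced under each configuration can then be scored with objective
# quality metrics.
import numpy as np
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm

segmenters = {
    "quickshift": SegmentationAlgorithm("quickshift", kernel_size=4,
                                        max_dist=200, ratio=0.2),
    "slic": SegmentationAlgorithm("slic", n_segments=100, compactness=10),
    "felzenszwalb": SegmentationAlgorithm("felzenszwalb", scale=100,
                                          sigma=0.5, min_size=50),
}

explainer = lime_image.LimeImageExplainer()
for name, segmentation_fn in segmenters.items():
    explanation = explainer.explain_instance(
        chest_xray, predict_fn, top_labels=1, num_samples=1000,
        segmentation_fn=segmentation_fn)
    # Coarser segmentations yield fewer, larger superpixels and thus
    # coarser explanations; quantitative metrics capture this trade-off.
    print(name, "superpixels:", len(np.unique(explanation.segments)))
```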

To address this instability, we introduce MindfulLIME, which intelligently generates purposeful samples using a graph-based traversal algorithm and uncertainty sampling. MindfulLIME significantly enhances the reliability and consistency of explanations compared to random sampling. Experimental results on the VinDr-CXR dataset confirm MindfulLIME's exceptional stability of 100%, along with improved localization precision over LIME. Comprehensive experiments across segmentation algorithms and numbers of generated samples further highlight MindfulLIME's superior performance. By addressing LIME's instability, MindfulLIME enhances the interpretability of machine learning models, especially in critical domains.
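
The abstract does not spell out MindfulLIME's sampling procedure; purely as an illustration of the uncertainty-sampling ingredient (not the thesis's actual algorithm), the sketch below keeps, among randomly proposed superpixel masks, those whose predictions fall closest to the decision boundary, where a local surrogate model learns the most. `predict_fn`, `apply_mask`, and `n_superpixels` are hypothetical placeholders.

```python
# Illustrative sketch of uncertainty sampling over LIME-style perturbations.
# NOT the MindfulLIME algorithm: it omits the graph-based traversal and uses
# assumed placeholder interfaces (`predict_fn`, `apply_mask`).
import numpy as np

def uncertainty_sample(image, predict_fn, apply_mask, n_superpixels,
                       n_candidates=5000, n_keep=1000, seed=0):
    rng = np.random.default_rng(seed)
    # Candidate binary masks over superpixels (1 = keep, 0 = occlude).
    masks = rng.integers(0, 2, size=(n_candidates, n_superpixels))
    # Classifier probabilities for each masked variant of the image.
    probs = predict_fn(np.stack([apply_mask(image, m) for m in masks]))
    # Uncertainty = closeness of the top-class probability to 0.5; the most
    # uncertain samples are the most informative for a local surrogate.
    uncertainty = -np.abs(probs.max(axis=1) - 0.5)
    keep = np.argsort(uncertainty)[-n_keep:]
    return masks[keep], probs[keep]
```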

Supervisor(s)

SHAKIBA RAHIMIAGHDAM

Date and Location

2023-09-08 10:00:00

Category

PhD Thesis