XAI Meets Heart Disease: How AI Explains Itself Like a Doctor

Explainable AI · Python

Wednesday, July 30, 2025

Introduction

In the age of artificial intelligence, we’re often told that machines can make better predictions than humans. But when it comes to healthcare, it’s not enough for an AI model to be accurate; it also needs to be understandable. If a doctor can’t interpret how a model reached a diagnosis, can we really trust it? This is where Explainable AI (XAI) steps in.

This project explores how machine learning models like decision trees and logistic regression, combined with tools like SHAP (SHapley Additive exPlanations), can provide transparency in healthcare predictions. Specifically, a heart disease prediction model was built to analyze whether it exhibits patterns similar to human cognitive biases, such as the framing effect and risk preferences, that are commonly observed in clinical decision-making.

Problem: Predicting Heart Disease — But Transparently

Heart disease is one of the leading causes of death globally. Early detection can save lives, but diagnosis is often complex and influenced by many factors: age, cholesterol, blood pressure, and more.

A machine learning model was trained on the classic UCI Heart Disease Dataset to predict whether a person has heart disease. Rather than optimizing purely for accuracy, the focus was placed on interpretability by using models such as:

  • Logistic Regression
  • Decision Trees

Both are transparent enough to audit, unlike deep neural networks.
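
Below is a minimal training sketch for that setup. It assumes a local heart.csv copy of the UCI data with the standard column names and a binary target column; the file name, split, and hyperparameters here are illustrative, not the exact configuration used in the project.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical local copy of the UCI Heart Disease data with a binary "target" column.
df = pd.read_csv("heart.csv")
X = df.drop(columns=["target"])
y = df["target"]

# Stratified split so both classes appear in the train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Two transparent models: a linear model and a shallow tree.
log_reg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_train, y_train)

for name, model in [("Logistic Regression", log_reg), ("Decision Tree", tree)]:
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```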

Enter SHAP: The AI Stethoscope

To understand how the model makes decisions, SHAP (SHapley Additive exPlanations) was used. SHAP assigns each feature (like cholesterol or age) a “contribution score” toward the prediction.

Think of it like this: the model is saying, “Your high cholesterol added 15% to your risk prediction, but your normal resting heart rate reduced it by 5%.”

This lets doctors and data scientists see how and why a decision was made.
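
As a rough sketch of how those contribution scores can be produced (continuing the training sketch above; the explainer choice and variable names are assumptions, not the project’s exact code):

```python
import shap

# Explain the logistic regression model; for linear models SHAP uses a linear
# explainer and reports contributions on the log-odds scale.
explainer = shap.Explainer(log_reg, X_train)
explanation = explainer(X_test)

# Per-feature "contribution score" for the first test patient.
patient = explanation[0]
for feature, value, contribution in zip(X_test.columns, patient.data, patient.values):
    print(f"{feature} = {value}: {contribution:+.3f}")
```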

SHAP in Action

Let’s look at a real example:

In this SHAP summary plot, the most influential features for the model’s predictions are chest pain type, the number of major vessels involved, and ST depression (oldpeak); these have the largest impact on the magnitude of the model’s output.
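
A plot like that can be produced with SHAP’s built-in summary view, reusing the explanation object from the sketch above:

```python
# Global view: which features move the model's output the most across the test set.
shap.plots.beeswarm(explanation)
```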

You can also visualize individual explanations:

Here, the model shows exactly why it classified this patient as high risk, almost like a doctor explaining their diagnosis line by line.
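
For a single patient, SHAP’s local plots lay out that line-by-line reasoning (again reusing the explanation object from the earlier sketch):

```python
# Local view: how each feature pushed this one patient's prediction up or down.
shap.plots.waterfall(explanation[0])

# The classic additive force plot is an alternative view of the same breakdown:
# shap.plots.force(explanation[0])
```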

What the Model Revealed

Once the data was balanced to give equal attention to both healthy and at-risk patients, the Decision Tree model performed impressively. It reached 83% accuracy, and more importantly, it had a recall of 86% for patients with heart disease. In plain terms? The model was good at spotting who was actually at risk, which is exactly what you want in a medical setting.
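
The post doesn’t spell out the exact balancing strategy; one simple way to reproduce the idea is to reweight the classes when fitting the tree and then check accuracy and recall (a sketch, not the original pipeline):

```python
from sklearn.metrics import accuracy_score, recall_score
from sklearn.tree import DecisionTreeClassifier

# Weight samples inversely to class frequency so both classes get equal attention.
balanced_tree = DecisionTreeClassifier(
    max_depth=4, class_weight="balanced", random_state=42
).fit(X_train, y_train)

pred = balanced_tree.predict(X_test)
print("accuracy:", round(accuracy_score(y_test, pred), 2))
print("recall (heart disease):", round(recall_score(y_test, pred, pos_label=1), 2))
```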

But the real insights came from looking under the hood with SHAP, a tool that helps explain how each input feature (like cholesterol or age) influences the model’s predictions.

Spotting Human-Like Patterns

What we found was fascinating.

  • Framing-Like Effects: The model reacted sharply when certain inputs sat just below a clinical risk threshold, like a blood pressure of 139 mmHg instead of 140. This small shift led to big differences in the model’s predictions. It’s very similar to how doctors sometimes react more strongly to how information is framed, even when the underlying difference is tiny (a small probe of this is sketched after this list).
  • Risk Sensitivity: The model also showed signs of being overly cautious. In situations where the data was unclear or borderline, it often leaned toward predicting that the patient did have heart disease. That’s a pattern we also see in real-life clinical decisions: doctors playing it safe when there’s uncertainty.
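
One way to probe that framing-like sensitivity is to nudge a single input across the clinical threshold while holding everything else fixed. The sketch below assumes the standard UCI column name trestbps for resting blood pressure and reuses the balanced tree from earlier; it is an illustrative probe, not the study’s protocol.

```python
# Nudge resting blood pressure across the 140 mmHg line for one otherwise-identical
# patient and compare the predicted risk ("trestbps" is the UCI column name).
probe = X_test.iloc[[0]].copy()

for bp in (139, 140):
    probe["trestbps"] = bp
    risk = balanced_tree.predict_proba(probe)[0, 1]
    print(f"trestbps = {bp}: predicted heart-disease risk = {risk:.2f}")
```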

Now, this doesn’t mean the model is actually thinking like a human. But it does suggest that the patterns it learned reflect the real-world choices and cautious instincts of the doctors who labeled the data.

In other words, our AI model didn’t just learn what the medical textbooks say; it learned how doctors actually behave.

Why This Matters

In medicine, transparency can be a matter of life and death. Doctors don’t just want a diagnosis; they want to know why it’s being made, especially when it challenges their clinical intuition. Patients deserve the same.

Explainable AI gives us the tools to bridge this gap. By using SHAP and other methods, we can uncover the logic behind AI models and spot when their decision paths resemble and potentially reinforce known human biases.

This is about more than trust. It’s about accountability, ethics, and building models that don’t just perform well, but perform responsibly.

Credits