Explainable AI with Gradient Boosting for Early Alzheimer’s Detection through Healthcare Data Mining

Kavitha R, Praba R

Abstract

Early diagnosis of Alzheimer's disease is a major concern in contemporary healthcare, since it is critical for timely treatment planning and for improving patients' quality of life. This study introduces an Explainable Artificial Intelligence (XAI) framework that combines Gradient Boosting algorithms with advanced healthcare data mining techniques to improve both the accuracy and the transparency of Alzheimer's disease prediction. Unlike traditional machine learning models, the proposed approach incorporates explainability techniques such as SHAP (SHapley Additive exPlanations) and feature importance mapping, giving clinicians a clear rationale for each prediction. By examining diverse healthcare data, including medical imaging, cognitive test results, electronic health records, and genetic biomarkers, the model uncovers non-linear relationships between risk factors and early-onset Alzheimer's disease. Experiments show that the proposed model outperforms existing algorithms, achieving a prediction accuracy of 96%, specificity of 97%, precision of 95%, and an AUC of 0.94, with a processing time of only 1.4 seconds. This transparency not only makes AI-based diagnostic tools more reliable but also helps physicians make data-driven decisions. The framework has broad potential applications, particularly in intelligent healthcare environments, where it can be integrated with electronic medical systems and IoT-based patient monitoring for real-time risk assessment. Overall, this work advances precision medicine by combining data mining, explainable AI, and Gradient Boosting models for the timely detection of Alzheimer's disease.
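
As a minimal illustration of the kind of workflow the abstract describes (not the authors' actual pipeline), the sketch below trains a Gradient Boosting classifier on synthetic tabular data and extracts feature importances, one of the explainability outputs mentioned above. The feature names and dataset are hypothetical placeholders, not the study's real cohort or biomarkers.

```python
# Hedged sketch: Gradient Boosting with feature-importance explainability.
# All feature names and data below are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a multi-modal clinical dataset.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=4, random_state=0)
feature_names = ["MMSE_score", "hippocampal_volume", "APOE_e4",
                 "age", "CSF_tau", "education_years"]  # hypothetical

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

model = GradientBoostingClassifier(n_estimators=200,
                                   learning_rate=0.05,
                                   random_state=0)
model.fit(X_tr, y_tr)

# AUC on held-out data, plus a ranked importance map that a clinician
# could inspect alongside per-patient SHAP values.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda t: -t[1])
```

In practice, per-prediction SHAP values (e.g. via a tree explainer) would complement these global importances by showing how each feature pushed an individual patient's risk score up or down.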
