The Impact of GHG Emissions on Human Health and the Environment Using XAI
Stanley Ziweritin1, David Waheed Idowu2

1S. Ziweritin, Department of Estate Management and Valuation, Akanu Ibiam Federal Polytechnic, Unwana-Afikpo, Nigeria.

2D. W. Idowu, Department of Computer Science, University of Port Harcourt, Nigeria.

Manuscript received on 20 July 2024 | Revised Manuscript received on 26 July 2024 | Manuscript Accepted on 15 September 2024 | Manuscript published 30 September 2024 | PP: 7-14 | Volume-13 Issue-3, September 2024 | Retrieval Number: 100.1/ijrte.C814013030924 | DOI: 10.35940/ijrte.C8140.13030924

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Explainable AI (XAI) is a revolutionary concept in artificial intelligence that helps professionals build trust in the decisions of learning models. Greenhouse gases released into the atmosphere are driving our weather to become more irregular and intense, which threatens human health and harms crops and plants. Conventional AI techniques remain popular, but they cannot disclose system behaviour in a manner that facilitates analysis. Predicting greenhouse gas (GHG) emissions and their impact on human health is an essential aspect of monitoring the emission rates of industries and other sectors. However, only a handful of investigations have examined the collective effect of sectors such as construction and transportation, among others, on CO2 emission patterns. This research addresses that knowledge gap by presenting an explainable machine learning model. The framework combines a random forest classifier with two different explainable AI methodologies to give insights into the viability of the proposed learning model. The goal is to use XAI to determine the impact of GHG emissions on humans and the environment. A quantitative survey was conducted to investigate the possibility of determining GHG emission rates more accurately. We trained a random forest model on GHG emission data and explained its predictions with the SHAP and LIME techniques, which provided local and global explanations with samples ordered by similarity, output value, and original sample ranking. The model yielded high accuracy, and XAI enhanced its interpretability, enabling decision-makers to comprehend what the AI system truly indicates. LIME exceeded SHAP in terms of comprehensibility and satisfaction, while SHAP surpassed LIME in terms of trustworthiness.
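The workflow the abstract describes can be illustrated with a minimal sketch. The snippet below is not the authors' actual code: the file name ghg_emissions.csv, the health_risk target column, and all hyperparameters are hypothetical placeholders. It trains a scikit-learn random forest on tabular emission data, then produces a global SHAP summary and a local LIME explanation, mirroring the local and global explanations mentioned above.

    # Hypothetical sketch of the paper's pipeline: random forest + SHAP + LIME.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    import shap
    from lime.lime_tabular import LimeTabularExplainer

    # Assumed dataset layout: sector-level emission features plus a
    # "health_risk" label (placeholder names, not from the paper).
    df = pd.read_csv("ghg_emissions.csv")
    X, y = df.drop(columns=["health_risk"]), df["health_risk"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Random forest classifier, as in the proposed framework.
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # SHAP: global feature attributions for the test set.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test)

    # LIME: local explanation for a single test instance.
    lime_explainer = LimeTabularExplainer(
        X_train.values,
        feature_names=X.columns.tolist(),
        class_names=[str(c) for c in model.classes_],
        mode="classification",
    )
    explanation = lime_explainer.explain_instance(
        X_test.iloc[0].values, model.predict_proba
    )
    print(explanation.as_list())

In this pairing, SHAP's tree explainer attributes the model's output across all features for every sample (global view), while LIME fits a simple surrogate around one instance (local view), which is why the two methods can score differently on comprehensibility and trustworthiness.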

Keywords: LIME, SHAP, Random Forest, Explainable AI, Interpretability
Scope of the Article: Computer Science and Applications