Decoding Energy Usage Predictions: An Application of XAI Techniques for Enhanced Model Interpretability

Gregorius Airlangga

Abstract


The growing complexity of machine learning models has heightened the need for interpretability, particularly in applications affecting resource management and sustainability. This study addresses the challenge of interpreting predictions from sophisticated machine learning models used for building energy consumption prediction. Leveraging Explainable AI (XAI) techniques, including Permutation Importance, SHapley Additive exPlanations (SHAP), and Local Interpretable Model-Agnostic Explanations (LIME), we dissect the predictive features influencing building energy usage. Our research examines a dataset of building characteristics and weather conditions, applying an XGBoost model to predict Site Energy Usage Intensity (Site EUI). The Permutation Importance method elucidated the global significance of features across the dataset, while SHAP provided a dual perspective, revealing both the global importance and the local impact of features on individual predictions. Complementing these, LIME offered rapid, locally focused interpretations, showcasing its utility where immediate insights are essential. The findings indicate that 'Energy Star Rating', 'Facility Type', and 'Floor Area' are among the top predictors of energy consumption, with environmental factors also contributing to the model's decisions. The application of XAI techniques yielded a nuanced understanding of the model's behavior, enhancing transparency and fostering trust in its predictions. This study contributes to sustainable energy management by demonstrating the use of XAI for insightful model interpretation, reinforcing the importance of interpretable AI in the development of energy policies and efficiency strategies. Our approach exemplifies the balance between predictive accuracy and the need for model transparency, advocating for the continued integration of XAI in AI-driven decision-making.
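To illustrate the first of the XAI techniques named above, the following is a minimal sketch of permutation importance: a feature's importance is measured as the drop in model score when that feature's column is randomly shuffled. The dataset, feature names (`energy_star_rating`, `floor_area`, `avg_temp`), and the use of scikit-learn's gradient boosting as a stand-in for XGBoost are all illustrative assumptions, not the paper's actual data or model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the building dataset (hypothetical feature names).
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
# The target depends strongly on column 0 and weakly on column 1.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)
features = ["energy_star_rating", "floor_area", "avg_temp"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Mean drop in R^2 when each column is shuffled; larger = more important."""
    shuf = np.random.default_rng(seed)
    baseline = model.score(X, y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            shuf.shuffle(Xp[:, j])  # break the feature-target link for column j
            drops.append(baseline - model.score(Xp, y))
        importances.append(np.mean(drops))
    return np.array(importances)

imp = permutation_importance(model, X_te, y_te)
for name, value in sorted(zip(features, imp), key=lambda t: -t[1]):
    print(f"{name}: {value:.3f}")
```

In practice one would use `sklearn.inspection.permutation_importance` (or SHAP's `TreeExplainer` for the tree-specific global/local view the abstract describes); the hand-rolled version above only makes the mechanics explicit.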


Keywords


Energy Usage; Predictions; SHAP; LIME; XAI; XGBoost


DOI: http://dx.doi.org/10.24014/ijaidm.v7i2.29041




