Topic: Explainable Artificial Intelligence (XAI)

This cluster of papers focuses on Explainable Artificial Intelligence (XAI): interpretable models, visual explanations, and responsible approaches to machine-learning interpretability. It covers concepts, challenges, and opportunities in XAI, including gradient-based localization, understanding deep neural networks, feature importance, and opening up black-box models. The papers also discuss responsibility and ethical considerations in AI.
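To make the recurring feature-importance idea concrete, here is a minimal, self-contained sketch using scikit-learn's permutation importance, a model-agnostic importance measure in the same spirit as the SHAP- and LIME-based analyses listed below. The dataset and model are purely illustrative and are not drawn from any of the papers in this cluster:

```python
# Minimal sketch: model-agnostic feature importance via permutation.
# Shuffle one feature at a time and measure how much test accuracy drops;
# larger drops mean the model relies on that feature more.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 2 of which are informative.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, n_redundant=0,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

SHAP values go further by attributing each individual prediction to the input features, but the global picture they produce is often compared against simple permutation baselines like this one.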
Latest Publications
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper) (preprint)

Generative AI lacks the human creativity to achieve scientific discovery from scratch (article)

Understanding the synergy of energy storage and renewables in decarbonization via random forest-based explainable AI (article)

Dual purpose of Shapley Additive Explanation (SHAP) in model explanation and feature selection for artificial intelligence-based digital twin of wastewater treatment plant (article)

Ethical and regulatory challenges of Generative AI in education: a systematic review (review)

Inevitable challenges of autonomy: ethical concerns in personalized algorithmic decision-making (article)

Explainable machine learning techniques for hybrid nanofluids transport characteristics: an evaluation of Shapley additive and local interpretable model-agnostic explanations (article)

Vulnerability detection using BERT based LLM model with transparency obligation practice towards trustworthy AI (article)

SHAP-Instance Weighted and Anchor Explainable AI: Enhancing XGBoost for Financial Fraud Detection (article)

Trust, Explainability and AI (article)

Highly Cited Publications (Past 5 Years)
Generative Adversarial Nets (book chapter) · 19788 citations · FWCI 9640.75

On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper) (preprint) · 12887 citations · FWCI 3789.87

Training language models to follow instructions with human feedback (preprint) · 4214 citations · FWCI 0

ChatGPT: five priorities for research (article) · 1603 citations · FWCI 83.56

Artificial intelligence: A powerful paradigm for scientific research (review) · 1357 citations · FWCI 131.62

A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development (article) · 1185 citations · FWCI 302.44

Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence (article) · 1113 citations · FWCI 284.31

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications (review) · 1112 citations · FWCI 116.97

The false hope of current approaches to explainable artificial intelligence in health care (review) · 1072 citations · FWCI 97.50

Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence (review) · 1069 citations · FWCI 273.07