OpenFL-XAI: Federated learning of explainable artificial intelligence models in Python

Mattia Daole, Alessio Schiavo, José Luis Corcuera Bárcena, Pietro Ducange, Francesco Marcelloni, Alessandro Renda

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Artificial Intelligence (AI) systems play a significant role in manifold decision-making processes in our daily lives, making the trustworthiness of AI more and more crucial for its widespread acceptance. Among others, privacy and explainability are considered key requirements for enabling trust in AI. Building on these needs, we propose a software tool for Federated Learning (FL) of Rule-Based Systems (RBSs): on the one hand, FL prioritizes user data privacy during collaborative model training; on the other hand, RBSs are deemed interpretable-by-design models and ensure high transparency in the decision-making process. The proposed software, developed as an extension to the Intel® OpenFL open-source framework, offers a viable solution for developing AI applications balancing accuracy, privacy, and interpretability.
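To make the idea of federated learning of rule-based models concrete, the sketch below shows a FedAvg-style weighted aggregation of per-client rule consequent parameters. This is a minimal illustration of the general concept only, not the OpenFL-XAI implementation or the OpenFL API; the function name, the first-order Takagi-Sugeno-style parameter layout, and the sample-count weighting are assumptions made for this example.

```python
"""Illustrative sketch (hypothetical, not OpenFL-XAI code): server-side
weighted averaging of rule consequent matrices received from clients."""
import numpy as np


def aggregate_rule_consequents(local_consequents, local_sample_counts):
    """Aggregate per-client rule parameters by weighted averaging.

    local_consequents: list of arrays, each of shape (n_rules, n_features + 1),
        i.e. one consequent coefficient vector (plus bias) per rule.
    local_sample_counts: list of ints, training set size of each client,
        used as aggregation weights (assumed scheme, FedAvg-style).
    """
    weights = np.asarray(local_sample_counts, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(local_consequents)          # (n_clients, n_rules, n_features + 1)
    return np.tensordot(weights, stacked, axes=1)  # (n_rules, n_features + 1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two hypothetical clients, each holding 3 rules over 2 features (+ bias term).
    client_a = rng.normal(size=(3, 3))
    client_b = rng.normal(size=(3, 3))
    global_consequents = aggregate_rule_consequents([client_a, client_b], [120, 80])
    print(global_consequents.shape)  # (3, 3): one global consequent vector per rule
```

Because the aggregated object remains a set of rule consequents rather than opaque weights, the resulting global model keeps the interpretable-by-design structure the abstract refers to, while raw training data never leaves the clients.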

Original language: English
Article number: 101505
Journal: SoftwareX
Volume: 23
DOI
Status: Published - Jul 2023
Externally published
