Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

An overview of key trustworthiness attributes and KPIs for trusted ML-based systems engineering
Adoption of machine-learning (ML) systems in deployment depends on their ability to actually deliver the expected service safely and to meet user expectations for quality and continuity of service.