Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Don’t ask if AI is good or fair, ask how it shifts power
Opinion piece by Pratyusha Kalluri in Nature

An overview of key trustworthiness attributes and KPIs for trusted ML-based systems engineering
Adoption of deployed machine-learning (ML) systems depends on their ability to actually deliver the expected service safely and to meet user expectations for quality and continuity of service.

Cooperating with machines
Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go).