Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Don’t ask if AI is good or fair, ask how it shifts power
Opinion piece by Pratyusha Kalluri in Nature

Is ethical AI possible?
An interview with Timnit Gebru, the founder of the Distributed AI Research Institute.

Pygmalion Displacement: When Humanising AI Dehumanises Women
Paper exploring the relationship between women and AI.

Generative AI and Research Integrity
The article critically reviews the use of generative AI in research.

An overview of key trustworthiness attributes and KPIs for trusted ML-based systems engineering
Adoption of deployed machine-learning (ML) systems depends on their ability to actually deliver the expected service safely, and to meet user expectations for quality and continuity of service.