Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Designing a Rights-Based Global Index on Responsible AI
Frameworks have been developed that set out core ethical principles to be upheld as the technology is designed, developed, used, and evaluated.