
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Application of the ALTAI tool to power grids, railway network and air traffic management
This document presents the responses from industry (operators of critical infrastructures) to the Assessment List for Trustworthy AI (ALTAI) questionnaire for three domains and specific use cases: power grid, railway network, and air traffic management.

Towards functional safety management for AI-based critical systems
The webinar provides attendees with a comprehensive understanding of the challenges and opportunities associated with integrating AI into safety-critical systems.

pygrank
pygrank is an open source framework to define, run and evaluate node ranking algorithms. It provides object-oriented and extensively unit-tested algorithmic components, such as graph filters, post-processors, measures, benchmarks, and online tuning.
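As a hedged sketch of how such a pipeline might look, the snippet below builds a toy networkx graph, chains a personalized PageRank graph filter with a normalization post-processor, and prints the resulting node scores. The graph, seed set, and the particular filter/post-processor chain are illustrative assumptions, not part of the resource itself.

```python
# Illustrative pygrank usage (assumes the networkx backend; the chosen
# filter and post-processor are examples, not a recommended configuration).
import networkx as nx
import pygrank as pg

# Toy graph and a seed set of nodes assumed to belong to a community
graph = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("C", "E")])
seeds = {"A": 1, "B": 1}

# Graph filter (personalized PageRank) chained with a normalization post-processor
algorithm = pg.PageRank(alpha=0.85) >> pg.Normalize("max")
ranks = algorithm(graph, seeds)  # graph signal mapping nodes to scores

print({node: float(ranks[node]) for node in graph})
```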

InDistill
InDistill enhances the effectiveness of the Knowledge Distillation procedure by leveraging the properties of channel pruning to both reduce the capacity gap between the models and retain the information geometry.
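For context only, the sketch below shows the standard softened-logit knowledge-distillation loss that such methods build on; it is a minimal, generic PyTorch example, not the InDistill implementation, and the channel-pruning-based alignment of intermediate layers is omitted.

```python
# Generic knowledge-distillation sketch (NOT the InDistill code): the student
# is trained on a weighted sum of cross-entropy on labels and KL divergence
# against the teacher's temperature-softened logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd

# Hypothetical usage with tiny stand-in models
teacher, student = nn.Linear(16, 10), nn.Linear(16, 10)
x = torch.randn(8, 16)
labels = torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, labels)
loss.backward()
```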

SAFEXPLAIN Introduction to Trustworthy AI for Safety-Critical Systems
This introductory video provides an overview of the steps taken by the SAFEXPLAIN project to ensure that the AI-based solutions used in safety-critical systems are trustworthy, explainable, and compliant with the safety guidelines of diverse industrial domains.