SAFEXPLAIN

The artificial intelligence (AI) needed for complex autonomous tasks such as self-driving cars depends on deep learning techniques. However, safety requirements mean that such techniques must also be explainable and traceable. The EU-funded SAFEXPLAIN project aims to address this challenge by developing new explainable deep learning solutions with end-to-end traceability that comply with functional safety requirements for critical autonomous AI-based systems while preserving high performance. Project work will include novel approaches to explain whether predictions can be trusted, and new strategies to prove correct operation. The project brings together three eminent European research centres and will conduct three case studies in the automotive, space and railway sectors.