
AI Fairness 360 (AIF360)
A GitHub repository for AIF360.
The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle.
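A minimal usage sketch, assuming the aif360 package is installed and the raw Adult/Census income files used by its bundled AdultDataset have been downloaded; the choice of "sex" as the protected attribute is illustrative only.

```python
# Minimal AIF360 sketch (assumes `pip install aif360` and that the Adult/Census
# data files expected by AdultDataset are already in place).
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Protected-attribute groups chosen purely for illustration.
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

dataset = AdultDataset()

# Detect bias: statistical parity difference on the original data.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference:", metric.statistical_parity_difference())

# Mitigate bias: reweigh instances so favorable outcomes are less dependent on group.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("After reweighing:", metric_transf.statistical_parity_difference())
```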


What-If Tool
A GitHub repository for the What-If Tool.
The What-If Tool (WIT) provides an easy-to-use interface for expanding understanding of a black-box classification or regression ML model.


Fairlearn
Fairlearn is an open-source, community-driven project to help data scientists improve the fairness of AI systems.
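An illustrative sketch of Fairlearn's assessment and mitigation APIs; the synthetic data and logistic-regression model below are placeholders chosen for the example, not a recommended setup.

```python
# Illustrative Fairlearn sketch (assumes `pip install fairlearn scikit-learn`).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic data: a binary sensitive feature that leaks into the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
sensitive = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * sensitive + rng.normal(size=1000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# Assess: disaggregate metrics by the sensitive feature.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=clf.predict(X),
    sensitive_features=sensitive,
)
print(mf.by_group)

# Mitigate: retrain under a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
print(MetricFrame(metrics=selection_rate, y_true=y,
                  y_pred=mitigator.predict(X),
                  sensitive_features=sensitive).by_group)
```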

Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.