XAIB

eXplainable Artificial Intelligence Benchmark

Started: Oct 2022

XAIB is an open benchmark that provides a way to compare different XAI methods using a broad set of metrics built to measure different aspects of interpretability. Compared to analogous solutions, XAIB is more universal, easier to extend, and has a complete ontology in the form of the Co-12 framework, which provides a basis for future research.
The structure of the benchmark and the properties of its quality metrics allow evaluation of different types of explainers in any real-world setting, which brings evaluation closer to practice.
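To illustrate the evaluation pattern such a benchmark embodies, here is a minimal, self-contained sketch: two explainers are scored on the same model and data by a single interpretability metric. Every name and metric below is illustrative only and is not XAIB's actual API; the "faithfulness" metric is a toy stand-in for the kind of Co-12 quality measure the benchmark implements.

```python
from typing import Callable, List, Sequence

Model = Callable[[Sequence[float]], float]
Explainer = Callable[[Model, Sequence[float]], List[float]]


def model(x: Sequence[float]) -> float:
    # Toy linear model: the second feature dominates the output.
    return 1.0 * x[0] + 5.0 * x[1]


def gradient_explainer(m: Model, x: Sequence[float]) -> List[float]:
    # Attribution = input * local gradient, via finite differences.
    eps = 1e-6
    base = m(x)
    attrs = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        attrs.append((m(xp) - base) / eps * x[i])
    return attrs


def constant_explainer(m: Model, x: Sequence[float]) -> List[float]:
    # Uninformative baseline: always credits the first feature.
    return [1.0] + [0.0] * (len(x) - 1)


def faithfulness(explainer: Explainer, m: Model,
                 samples: Sequence[Sequence[float]]) -> float:
    """Toy metric: fraction of samples where ablating the top-attributed
    feature changes the output at least as much as ablating the
    least-attributed one."""
    hits = 0
    for x in samples:
        attrs = explainer(m, x)
        top = max(range(len(x)), key=lambda i: abs(attrs[i]))
        bot = min(range(len(x)), key=lambda i: abs(attrs[i]))

        def ablate(i: int) -> float:
            xa = list(x)
            xa[i] = 0.0
            return abs(m(x) - m(xa))

        hits += ablate(top) >= ablate(bot)
    return hits / len(samples)


samples = [[1.0, 2.0], [3.0, 1.0], [0.5, 0.5]]
scores = {
    name: faithfulness(e, model, samples)
    for name, e in [("gradient", gradient_explainer),
                    ("constant", constant_explainer)]
}
print(scores)  # gradient scores 1.0, constant baseline 0.0
```

Because the metric depends only on model outputs, not on model internals, any explainer that produces feature attributions can be scored this way, which is the property that lets a benchmark of this kind cover different explainer types in realistic settings.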