Artificial intelligence is often presented as a ‘black box’ providing results that defy human understanding. This need not be the case, as Bastien Zimmermann (Craft AI) explained at the 2022 Open Source Experience Paris event. Several open source software libraries promise ‘explainable AI’ (XAI), providing methods that make machine learning-derived results understandable. Zimmermann presented some of the main XAI libraries in terms of their target areas and code maturity. XAI solutions range from single-developer projects of uncertain scope to enterprise-strength environments.
Christoph Molnar, high priest of XAI, describes the approach as ‘adding methods to a black box so it is understandable’. As an example, an ML model trained to identify dogs may come unstuck when shown a wolf. XAI would then add an ‘explanation’ to the training set to help the algorithm along. This is not as trivial as it might sound. AI safety is becoming an important legal field. In the EU, the AI Act and new GDPR rules mandate a ‘coherent explanation’ of how an AI system works.
The SHAP library is behind most XAI. SHAP (SHapley Additive exPlanations) is described as a ‘game theoretic’ approach to explaining the output of a machine learning model. Shapley values indicate which input features are responsible for model output. For Zimmermann, the best XAI library is Alibi Explain. Users can create and embed counterfactual explanations in a model (‘a wolf is not a dog’). For interactivity, Zimmermann recommends the Explainer Dashboard with ‘quick and easy explainers for different methods’. But users need to evaluate XAI libraries in terms of code maintenance, documentation and features. ‘Lots of tools fail in one domain or another’. GitHub metrics are often a good indicator. In the Q&A, Zimmermann allowed himself a plug for the Seldon.io MLOps for explainable AI offering. The Captum model interpretability library for PyTorch also got a plug.
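To make the Shapley-value idea concrete, the sketch below computes exact Shapley values for a toy model in pure Python. This is not the SHAP library’s API (which adds efficient approximations for real models); it is a minimal illustration of the underlying game-theoretic formula, with the model, inputs and baseline all invented for the example. Each feature’s value is its average marginal contribution across all coalitions of the other features, where a coalition’s ‘payout’ is the model evaluated with out-of-coalition features reset to a baseline.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, against a baseline.

    A coalition S of feature indices is 'played' by evaluating f on an
    input that takes x's values on S and baseline values elsewhere.
    """
    n = len(x)

    def payout(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            # Weight of each size-k coalition in the Shapley formula.
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                # Marginal contribution of feature i to coalition S.
                phi[i] += w * (payout(set(S) | {i}) - payout(set(S)))
    return phi

# Toy linear model: for linear models, Shapley values reduce to
# coefficient * (x_i - baseline_i), so the result is easy to check.
model = lambda z: 3 * z[0] + 2 * z[1] - z[2]
values = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(values)  # approximately [3.0, 2.0, -1.0]
```

By construction the values sum to f(x) − f(baseline), the ‘efficiency’ property that makes the attribution interpretable: each feature gets a share of the gap between the prediction and the baseline prediction. The exact computation is exponential in the number of features, which is why the real SHAP library relies on model-specific shortcuts and sampling.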
Watch Zimmermann’s talk on YouTube.
Thomas Moreau explained how French ICT behemoth Thales Group has deployed an ‘Inner Source’ community to leverage open source software in its projects. Inner Source, the integration of open source practices into professional IT projects, is a concept originally promoted by Tim O’Reilly over 20 years ago and is now reported to be in use at many large companies including PayPal, Siemens, Bosch, Engie and even Microsoft. Thales uses Inner Source to avoid in-house silos by building reusable code building blocks. This has involved a cultural transformation to encourage code sharing across the company, an approach that has led to the ‘serendipitous finding of something good without looking for it’.
The Thales Inner Source TISS stack is built on three pillars: collaborative code development, a legal framework for deployment (with inspiration from the Eclipse Foundation) and community. TISS has governance boards and entices contributors with goodies and other incentives. Meetups and hackathons are also organized. Open source components leveraged by TISS include SonarQube, GitLab and components from JFrog. Public-facing products from TISS include Cryptobox and Citadel (secure communications and document management) from Thales’ Ercom unit. Thales also contributes to Eclipse Capella, an open source solution for model-based systems engineering.
Watch Moreau’s presentation (in French) on YouTube.
More from Open Source Experience.
© Oil IT Journal - all rights reserved.