PMsquare


AI Ethics & Explainability for IBM Analytics Practitioners

In Gartner's latest Magic Quadrant for Data Science and Machine Learning Platforms, IBM sits squarely in the topmost, rightmost portion: the Leaders quadrant. Products like IBM Cloud Pak for Data (CP4D) and IBM Watson Studio are big reasons why. Strategically, more and more AI components derived from CP4D and Watson Studio have been making their way into IBM's analytics and planning products.

In fact, with recent releases, IBM Cognos Analytics (CA) and IBM Planning Analytics (PA) have become part of the IBM Watson product family. It makes sense: AI is meant to augment human intelligence and facilitate decision making, so the fit is natural. Given that, and the ever-tighter AI integration each product will surely continue to see, we analytics practitioners need to understand the AI pillars and explainability principles that IBM espouses. These principles should be understood first by CA and PA developers, administrators, and product owners, as they will inform the future direction of the AI components of each core product.

Most importantly, practitioners in our space will need to use the explainability toolkit to explain to our business partners why models (and their integrated, downstream components) predict, project, suggest, or decide what they do.

In this brief post, we'll go over the particulars that will serve as a foundation for explaining IBM's AI tenets and methods.

From IBM directly: "Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact, and potential biases. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development. As AI becomes more advanced, humans are challenged to comprehend and retrace how the algorithm came to a result. The whole calculation process is turned into what is commonly referred to as a 'black box' that is impossible to interpret. These black box models are created directly from the data. And not even the engineers or data scientists who create the algorithm can understand or explain what exactly is happening inside them or how the AI algorithm arrived at a specific result."
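To make the black-box problem concrete, here is a minimal sketch (not from IBM's materials) of one common model-agnostic, post-hoc technique: permutation importance. We train an opaque ensemble model, then measure how much shuffling each feature degrades its accuracy; the scikit-learn dataset and model below are illustrative stand-ins, not anything specific to the IBM products discussed here.

    # A minimal sketch of post-hoc explanation for a "black box" model,
    # using scikit-learn; dataset and model are illustrative stand-ins.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque ensemble: accurate, but its internals are hard to interpret.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Permutation importance asks: how much does shuffling one feature
    # degrade held-out performance? Large drops flag features the model
    # relies on, giving a first-pass explanation of its behavior.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    top5 = sorted(zip(X.columns, result.importances_mean),
                  key=lambda t: -t[1])[:5]
    for name, score in top5:
        print(f"{name}: {score:.3f}")

This kind of output doesn't open the black box, but it gives a developer or business partner a defensible answer to "what is this model actually looking at?"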

There are many advantages to understanding how an AI-enabled system has led to a specific output. Explainability can help developers ensure that the system is working as expected, it might be necessary to meet regulatory standards, or it might be important in allowing those affected by a decision to challenge or change that outcome.

To those ends, the AI ethics pillars that IBM has established will help guide its creation and delivery of AI products. AI has the potential to affect the lives of billions of people and to transform society. It's no wonder that so much thought has been given to the ethics necessary to employ such technology (if only the advent of social media tech had brought similar consideration!). Other leaders and advocates in the AI space have considered and adopted guiding principles very similar to IBM's. IBM's stated AI ethics pillars are as follows:

  • Explainability - Good design does not sacrifice transparency in creating a seamless experience.

  • Fairness - Properly calibrated, AI can assist humans in making fairer choices.

  • Robustness - As systems are employed to make crucial decisions, AI must be secure and robust.

  • Transparency - Transparency reinforces trust, and the best way to promote transparency is through disclosure.

  • Privacy - AI systems must prioritize and safeguard consumers’ privacy and data rights. 

Only by embedding ethical principles into AI applications and processes can we build systems based on trust.

The open source AI Explainability 360 toolkit (created by IBM Research and donated to the Linux Foundation's LF AI & Data) offers useful explainability algorithms, demos, tutorials, and guides to help us build more transparent and trustworthy AI. In the toolkit, you can find:

  • Tutorials for credit approvals, employee retention, medical expenditure, and others 

  • Python implementations of explainability algorithms such as Boolean Decision Rules via Column Generation, Generalized Linear Rule Models, and the Contrastive Explanations Method, among others (see the sketch after this list)

  • Proposed quantitative metrics for the quality of explanations, such as faithfulness and monotonicity

  • Persona-based demos that walk through the process of explaining models to consumers

  • Whitepapers

  • A directory of Jupyter notebooks providing working examples of explainability on sample datasets

  • An open Slack channel for AI Explainability 360
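As a taste of the toolkit, here is a minimal sketch of its Boolean Decision Rules via Column Generation (BRCG) path, a directly interpretable approach where the learned rule set is the model itself. It follows the API in the AIX360 documentation (assuming aix360 is installed via pip); the scikit-learn breast-cancer dataset stands in for the toolkit's own tutorial datasets such as credit approval.

    # A minimal sketch using AI Explainability 360 (pip install aix360).
    # The breast-cancer dataset is an illustrative stand-in for the
    # toolkit's tutorial datasets (credit approval, etc.).
    from aix360.algorithms.rbm import (BooleanRuleCG, BRCGExplainer,
                                       FeatureBinarizer)
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # BRCG works on binarized features: each continuous column becomes
    # a set of threshold tests (with negations allowed).
    binarizer = FeatureBinarizer(negations=True)
    X_train_bin = binarizer.fit_transform(X_train)

    # The learned rule set IS the model, so the explanation and the
    # predictor are one and the same (a directly interpretable model).
    explainer = BRCGExplainer(BooleanRuleCG())
    explainer.fit(X_train_bin, y_train)

    # Prints human-readable IF-THEN rules for the positive class.
    print(explainer.explain()['rules'])

Directly interpretable models like this trade some accuracy for rules a business partner can read unaided; for models that must remain opaque, the toolkit's post-hoc methods (such as the Contrastive Explanations Method) explain individual predictions instead.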


We analytics practitioners should keep these tenets in mind and ensure they make their way into our AI-based decisions and predictions. We should also use the toolkit's resources to become practiced at explaining AI models and results. The better we do this, the more comfortable our organizations can be in benefiting from the potentially transformational suggestions our AI models and components produce.

Next Steps

We hope you found this article informative. Be sure to subscribe to our newsletter for data and analytics news, updates, and insights delivered directly to your inbox.

If you have any questions or would like PMsquare to provide guidance and support for your analytics solution, contact us today.
