Back in 2019, I wrote my predictions of the advancements we'd see in AI in 2020. In one of those predictions, I discussed the perennial problem of algorithmic explainability, or the ability of algorithms to explain themselves, and how it would come to the fore this year. Solving this problem is key to business success, as the general public is becoming increasingly uncomfortable with black-box algorithms.
As transparency is the most prevalent principle in the current AI ethics literature, it is no wonder that explainability, a subset of transparency, is on the lips of government officials, technologists, and laypeople alike.
The problem: understanding our data
We’ve all heard the story: This ever-changing world in which we live is fundamentally data-driven. Data experts use sophisticated analytics and data science tools to make meaning from the manifold mounds of data madness that surround us.
But with big data comes big responsibility. Algorithmic methods of working with the data, such as machine learning and deep learning (a subset of the former), can produce robust results, allowing an analyst to predict future outcomes based on historical data. Unfortunately, with many of these methods, the results are clear, but the reasons for how the algorithm arrived at them are far less so. This is largely due to the complexity of the algorithms, especially deep learning, in which there can be many layers of computation between the input and the output.
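To make the contrast concrete, here is a minimal sketch (using scikit-learn purely for illustration, not any particular vendor's tooling) of the difference between a model whose reasoning can be read straight off its coefficients and a multi-layer neural network that gives an answer without a readable reason:

```python
# Illustrative sketch only: an interpretable model vs. a harder-to-explain one,
# using scikit-learn's built-in breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A linear model: each learned coefficient is a direct, human-readable
# statement of how a feature pushes the prediction up or down.
linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
coefs = dict(zip(X.columns, linear.named_steps["logisticregression"].coef_[0]))
print(sorted(coefs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5])

# A multi-layer network: the prediction passes through hidden layers of weights,
# so no single number "explains" the role of any one feature.
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
).fit(X, y)
print(mlp.predict(X[:1]))  # a clear answer, but the reasoning is opaque
```

The linear model's top coefficients double as an explanation; the network's weights, spread across hidden layers, do not.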
This problem is compounded by the fact that legal frameworks the world over, such as the General Data Protection Regulation (GDPR), afford people the right to an explanation of an algorithm's output.
Merry Marwig, G2's privacy and cybersecurity market research analyst, argues:
Article 22 of the EU's GDPR entitled, "Automated individual decision-making, including profiling," addresses AI explainability. The Article's purpose is to give a person the ability to request a human to review a case where AI has made a decision impacting the person's life. For example, if a person applies for a bank loan and is denied, they have a right to ask for the factors that contributed to this outcome. Perhaps it was because they had poor credit or a prior bankruptcy and were deemed too risky to extend a loan. Those are straightforward reasons for a loan denial, but what if the reason was due to myriad data, logic, and computing that cannot be easily explained by a human? This is the trouble a lack of AI explainability poses to companies and the difficulty in complying with GDPR's Article 22.
However, it should be noted that legal scholars such as Sandra Wachter question the scope and extent of the "right to explanation" in the GDPR and its application to artificially intelligent algorithmic systems, arguing that "the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making."
The solution: algorithmic explainability
Beyond government legislation, business leaders are also listening to the voice of the people who, per an IBM Institute for Business Value survey, are demanding—and will continue to demand—more explainability from AI.
IT professionals are well aware of its benefits, especially in a fiscal sense. According to CCS Insight's annual survey of IT decision-makers, transparency into how systems work and are trained is now one of the most important requirements when investing in AI and machine learning, cited by almost 50% of respondents.
With that in mind, implementing algorithmic explainability into a business's tech stack seems like an almost no-regrets move, giving stakeholders the ability to peer inside the black box and understand the inner workings of the algorithms.
If only it were that easy…
The problem with the solution
As with many idealized solutions, explainability is not perfect and does not work for every problem. We have seen three key issues with its expansion and adoption.
Depth and complexity of the algorithm
Although some machine learning algorithms are more amenable to explanation due to their relative simplicity, others, such as deep learning models, are trickier due to their nested nature and the hidden layers that sit between input and output. That being said, this does not mean that all is lost and that we should give up on our goal of explainable AI. (As the saying goes, if at first you don't succeed, try, try, try again.)
We are seeing a trend in the data science and machine learning platform market where providers are responding to this clarion call and providing tools to give a user’s algorithms some form of explainability. For example, we have seen tools—such as IBM’s AI Explainability 360, AWS’ SageMaker Debugger, and Google’s Explainable AI—that intentionally tackle only a subset of algorithms within a particular use case, with plans to expand in the future.
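Each of the vendor tools named above has its own API; as a rough, hedged sketch of what model-agnostic explainability looks like in practice, the example below uses scikit-learn's permutation_importance to surface which features a fitted "black-box" model actually relies on:

```python
# A generic, illustrative sketch of model-agnostic explainability using
# scikit-learn's permutation_importance (not the vendor tools named above).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda kv: kv[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Shuffling a feature and measuring the resulting drop in accuracy is one common way to approximate "why" a model behaves as it does without needing access to its internals.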
The AI & machine learning operationalization (MLOps) software space is hot. On G2, we have seen tremendous interest in the category, with traffic growing by 842% in July 2021.
In addition, G2 added an attribute to the AI & Machine Learning Operationalization (MLOps) software category for tools that are geared toward algorithmic explainability. With these tools, data science teams can bake explainability into their data pipeline, helping explain their algorithms and their results to other departments, customers, and more.
Reviewers describe how important explainability is to their data science pipelines. For example, one reviewer in the MLOps category wrote:
“ML Observability is the best part of MLOps pipeline. Monitoring the models for Explainability and Drift monitoring with Grafana and Prometheus is more insightful and makes anyone interested to dive deep.”
It's not what you say—it's how you say it
As with any solution, it is key that the explanation is tailored to the user, based on their given skill set, role, and purpose. For example, it might be counterproductive and ineffective to provide an in-depth, data-heavy explanation to a business leader. At the same time, this style of explanation might be well-suited to a data analyst or data scientist who is looking to tweak the algorithm. Some of the tools available on the market are taking this conundrum into consideration and are allowing for the explanation to be tailored to the end user.
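As a rough illustration of audience-tailored explanation (the feature names, values, and wording below are my own assumptions, not any vendor's output), the same set of importances can be rendered as a full table for a data scientist and as a one-sentence summary for a business stakeholder:

```python
# Illustrative sketch: rendering the same explanation two ways for two audiences.
# The importance values and phrasing are assumed for the example.
from typing import Dict

def explain(importances: Dict[str, float], audience: str) -> str:
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    if audience == "data_scientist":
        # Full detail: every feature with its signed contribution.
        return "\n".join(f"{name}: {value:+.3f}" for name, value in ranked)
    # Business audience: lead with the single most influential factor in plain language.
    top_name, top_value = ranked[0]
    direction = "increased" if top_value > 0 else "decreased"
    return f"The biggest factor was '{top_name}', which {direction} the predicted risk."

importances = {"prior_bankruptcy": 0.42, "credit_utilization": 0.31, "income": -0.18}
print(explain(importances, "data_scientist"))
print(explain(importances, "business"))
```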
Unfortunately, the very act of explaining AI can sometimes backfire: Too much transparency can lead to information overload.
In a 2018 study looking at how non-expert users interact with machine learning tools, for example, Jennifer Wortman Vaughan, a computer scientist at Microsoft Research, found that transparent models can actually make it harder to detect and correct a model’s mistakes, because the added transparency can lead users to over-trust the model.
As Douglas Heaven of MIT Technology Review wrote, ”Ultimately, we want AIs to explain themselves not only to data scientists and doctors but to police officers using face recognition technology, teachers using analytics software in their classrooms, students trying to make sense of their social media feeds—and anyone sitting in the backseat of a self-driving car.”
Explainability as a "nice to have" versus a "must have"
Just because something can be explained does not mean that it should be or must be.
According to Dr. Francesco Corea, research lead at Balderton Capital, AI explainability could create problems in the future.
Dr. Corea’s point reiterates the above problem: Just because something can be explained doesn’t mean that it should be. In addition, I agree wholeheartedly that algorithmic explainability must be a key part of the design process and not just an afterthought.
The future
This move toward explainability is exciting for another reason: It moves forward the fight against algorithmic bias. It is well documented that, due to a number of factors such as biased datasets, algorithms can produce biased outputs. One example is the fact that many facial recognition systems are better at detecting white faces.
Damon Civin, principal data scientist at Arm, remarked that the push toward explainable AI can help reduce the impact of biased algorithms: “If human operators could check in on the ‘reasoning’ an algorithm used to make decisions about members of high-risk groups, they might be able to correct for bias before it has a serious impact.” Anyone interested in seeing the disastrous and destructive things that can come about as the result of biased data is invited to read Caroline Criado-Pérez's eye-opening book, Invisible Women.
Thankfully, as mentioned above, data science and machine learning platforms are beginning to bake explainability features into their products. With these capabilities, one can build AI-powered applications and software aimed at delivering transparency. As a result, companies will not only be able to tick off an ethical checkbox but also provide their end users with a responsible product that can be understood by all.
Edited by Sinchana Mistry

Matthew Miller
Matthew Miller is a research and data enthusiast with a knack for understanding and conveying market trends effectively. With experience in journalism, education, and AI, he has honed his skills in various industries. Currently a Senior Research Analyst at G2, Matthew focuses on AI, automation, and analytics, providing insights and conducting research for vendors in these fields. He has a strong background in linguistics, having worked as a Hebrew and Yiddish Translator and an Expert Hebrew Linguist, and has co-founded VAICE, a non-profit voice tech consultancy firm.