
When talking about explainability in AI through the Deductive-Nomological (DN) framework, the explanandum could be either the (opaque) algorithm itself or the performance of the algorithm. The DN model states that an explanandum can be a phenomenon or a regularity. The performance of the algorithm can clearly be seen as a process, and thus as a phenomenon. But what about the algorithm itself?

  • Perhaps the explanandum of both the algorithm itself and DN are the same good old tree of knowledge and the good and bad… Commented Nov 23, 2024 at 21:55

1 Answer


I work in AI and often have to ensure explainability, so I feel like I can opine here.

I'm not sure about the D-N framework specifically, but I do know that in practice what is being explained is the specific output of the AI model.

A common example is an AI model making a prediction that a particular customer will cancel their subscription (a phenomenon called "churn"). Let's say AI model "M" says customer "C" has a 95% chance of churning in the next 3 months.

The users of this model will want to know why it's saying that. Typically, this is done using SHAP analysis, which quantifies how much each feature of that customer contributed to the predicted probability.
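To make that concrete, here is a minimal sketch of the idea behind SHAP: computing exact Shapley values by brute force over a toy, hand-written churn scorer. The feature names, weights, and baseline values are all illustrative assumptions, not a real model, and production SHAP libraries use far more efficient approximations than this exhaustive loop.

```python
from itertools import combinations
from math import factorial

# Hypothetical baseline ("average") customer and a hand-written linear
# churn scorer; all names and weights are illustrative assumptions.
BASELINE = {"months_active": 24, "support_tickets": 1, "logins_per_week": 5}

def churn_score(f):
    # Higher score = more likely to churn (illustrative weights).
    return (0.9
            - 0.01 * f["months_active"]
            + 0.05 * f["support_tickets"]
            - 0.02 * f["logins_per_week"])

def shapley_values(customer, baseline, model):
    """Exact Shapley value per feature: its average marginal contribution
    over all feature subsets, with 'absent' features set to the baseline.
    This is the quantity SHAP libraries approximate efficiently."""
    names = list(customer)
    n = len(names)
    phi = {}
    for i in names:
        others = [j for j in names if j != i]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = {j: customer[j] if (j in subset or j == i) else baseline[j]
                          for j in names}
                without_i = {j: customer[j] if j in subset else baseline[j]
                             for j in names}
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

customer = {"months_active": 3, "support_tickets": 8, "logins_per_week": 1}
print(shapley_values(customer, BASELINE, churn_score))
```

For this customer, the output attributes most of the elevated score to `support_tickets`, and the per-feature values sum exactly to the difference between the customer's score and the baseline's score, which is the efficiency property that makes Shapley values attractive for explanations.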

Some AI models are intrinsically less opaque than others. Deep neural nets and random forests are notoriously hard to decipher due to their complexity, whereas decision trees and linear regression are quite straightforward to interpret.
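As a small sketch of why linear regression sits at the transparent end of that spectrum: its learned coefficients are directly readable as "one unit of this feature moves the prediction by this much." The data below is synthetic, generated from known weights purely so the recovered coefficients are checkable.

```python
import numpy as np

# Hypothetical features: [months_active, support_tickets] -> churn score,
# generated from known weights (an assumption for this illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 2))
true_w = np.array([-0.03, 0.07])
y = X @ true_w + 0.5  # 0.5 is the intercept

# Ordinary least squares; append a column of ones to fit the intercept.
A = np.hstack([X, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted model is fully transparent: each coefficient states exactly
# how much one unit of that feature moves the predicted score.
print(coef)
```

Contrast this with a deep net, where the mapping from input to output passes through thousands of entangled weights and no single parameter carries a human-readable meaning, hence the need for post-hoc tools like SHAP.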

There is a newer architecture, Kolmogorov-Arnold Networks (KANs), which may help add even more explainability to complex neural network models.
