Representation of Explanations of Possibilistic Inference Decisions
Abstract
In this paper, we study how to explain to end-users the inference results of possibilistic rule-based systems. We formulate a necessary and sufficient condition for justifying the possibility degree of each output attribute value by a relevant subset of rule premises. We then apply reduction functions to the selected premises in order to form two kinds of explanations: the justification and the unexpectedness of the possibility degree of an output attribute value. The justification is composed of possibilistic expressions that are sufficient to justify the possibility degree of the output attribute value.
The unexpectedness is a set of possible or certain possibilistic expressions that are not involved in determining the inference result, although they may appear to be incompatible with it.
We then define a representation of explanations of possibilistic inference decisions that relies on conceptual graphs and can serve as input to natural language generation systems. The extracted justification and unexpectedness are represented as nested conceptual graphs. All our constructions are illustrated with an example of a possibilistic rule-based system that controls the blood sugar level of a patient with type 1 diabetes.
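To make the idea of justifying a possibility degree concrete, the following is a minimal Python sketch assuming a simplified min-max possibilistic rule model, which is not the paper's exact formalism: each rule concludes an output attribute value with a degree equal to the minimum of its premise possibility degrees, and the output value's degree is the maximum over the rules concluding it. The facts, rules, and attribute names are hypothetical illustrations inspired by the diabetes example, and the brute-force search below is a stand-in for the paper's reduction functions.

```python
# A minimal sketch of min-max possibilistic inference and premise selection.
# All facts, rules, and attribute names are hypothetical illustrations.
from itertools import combinations

# Hypothetical possibility degrees of input attribute values
facts = {
    ("blood_sugar", "high"): 0.8,
    ("meal", "recent"): 0.6,
    ("insulin_dose", "low"): 0.3,
}

# Hypothetical rules: (premises, (output_attribute, output_value))
rules = [
    ([("blood_sugar", "high"), ("meal", "recent")], ("bolus", "increase")),
    ([("insulin_dose", "low")], ("bolus", "increase")),
]

def possibility(output):
    """Min-max inference: max over concluding rules of the min premise degree."""
    degrees = [min(facts[p] for p in premises)
               for premises, concl in rules if concl == output]
    return max(degrees, default=0.0)

def justification(output):
    """Smallest premise subset that alone reproduces the output's possibility
    degree (a brute-force stand-in for the paper's reduction functions)."""
    target = possibility(output)
    used = {p for premises, concl in rules if concl == output for p in premises}
    for k in range(1, len(used) + 1):
        for subset in combinations(sorted(used), k):
            kept = set(subset)
            degrees = [min(facts[p] for p in premises)
                       for premises, concl in rules
                       if concl == output and set(premises) <= kept]
            if degrees and max(degrees) == target:
                return kept, target
    return used, target

print(possibility(("bolus", "increase")))    # 0.6
print(justification(("bolus", "increase")))  # e.g. {('blood_sugar', 'high'), ('meal', 'recent')}, 0.6
```

The exhaustive search above is exponential in the number of premises; the abstract's point is precisely that the paper's necessary and sufficient condition characterizes the relevant premise subset directly, without such enumeration.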