Description: Post-hoc explanation is an approach in explainable artificial intelligence (XAI) in which justifications or clarifications for a model's decisions are produced after the model has made its predictions, rather than being built into the model itself. The goal is to make the model's behavior understandable so that users can see why a specific conclusion was reached. Such explanations are often essential in contexts where automated decisions have significant consequences, such as healthcare, criminal justice, or finance. Post-hoc explanations may include identifying the features that most influenced a decision, as well as visualizing patterns in the data that led to the prediction. The approach is especially valuable for complex models such as deep neural networks, whose opacity makes their internal operation hard to inspect directly. By accompanying a model's decision with an understandable account, post-hoc explanation aims to increase user trust in the system and to help reveal potential biases or errors in its predictions.
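As a concrete illustration of the feature-identification idea described above, the following is a minimal sketch of one common post-hoc technique, permutation feature importance: the model is treated as a black box, each feature's values are shuffled in turn, and the resulting drop in accuracy indicates how much the model relied on that feature. The `black_box_predict` function is a hypothetical stand-in for any opaque classifier; all names here are illustrative.

```python
import random

# Hypothetical black-box model: stands in for any opaque classifier.
# Internally it leans heavily on feature 0, but the explanation method
# below does not know or use that fact.
def black_box_predict(x):
    return 1 if 0.8 * x[0] + 0.1 * x[1] + 0.1 * x[2] > 0.5 else 0

def accuracy(X, y):
    return sum(black_box_predict(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, n_repeats=20, seed=0):
    """Post-hoc importance: mean accuracy drop when one feature's
    values are shuffled, breaking its link to the target."""
    rng = random.Random(seed)
    base = accuracy(X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            X_perm = [list(x) for x in X]
            for i, v in enumerate(col):
                X_perm[i][j] = v
            drops.append(base - accuracy(X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy dataset labeled by the model itself, so baseline accuracy is 1.0
# and any drop is attributable to the shuffled feature alone.
data_rng = random.Random(1)
X = [[data_rng.random() for _ in range(3)] for _ in range(200)]
y = [black_box_predict(x) for x in X]

imp = permutation_importance(X, y)
```

Because the method only queries the model's predictions, the same code applies unchanged to any classifier, which is what makes it post-hoc: the explanation is computed after, and independently of, training.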