Description: An explanatory model is an artificial intelligence system designed to provide clear, understandable explanations for its predictions or decisions. Unlike "black box" models, whose internal reasoning is opaque, explanatory models aim to break the decision-making process into logical, accessible steps. This is crucial in domains where transparency is essential, such as medicine, law, and finance. Key characteristics of these models include the ability to state the reasons behind a prediction, to identify the features that most influenced the outcome, and to remain interpretable for non-technical users. Explanatory models matter because they can increase user trust in automated decisions, make biases and errors easier to detect, and help satisfy regulations that require transparency in the use of algorithms. As artificial intelligence plays an ever larger role, the ability to understand and trust these systems is essential to their responsible adoption.
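The idea of offering reasons behind a prediction can be sketched with a minimal, hypothetical example: a linear scorer that returns not only its output but also each feature's contribution to it. The feature names, weights, and bias below are illustrative assumptions, not part of any real system.

```python
# Hypothetical sketch of an "explainable" linear scorer: alongside the
# prediction, it reports how much each feature contributed to the result.
def explain_prediction(features, weights, bias=0.0):
    """Return (score, contributions), where contributions maps each
    feature name to its weighted effect on the final score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Illustrative weights for a toy credit-scoring scenario (assumed values).
weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
applicant = {"income": 4.0, "debt": 2.0, "age": 3.0}

score, reasons = explain_prediction(applicant, weights, bias=1.0)
# score = 1.0 + 2.0 - 1.6 + 0.3 = 1.7
# reasons shows that "debt" pushed the score down by 1.6, while
# "income" pushed it up by 2.0 -- the kind of per-feature account
# an explanatory model exposes and a black-box model does not.
```

In this toy setup, the per-feature breakdown is exactly the kind of output that lets a non-technical user see why a decision was made and lets an auditor spot a feature that is weighted in a biased way.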