Description: Intelligibility, in the context of artificial intelligence (AI), refers to the quality of being understandable, especially as applied to AI models. This characteristic is crucial for ensuring that users can interpret and trust the decisions these systems make. An intelligible model does not merely produce results; it also conveys how and why it reached its conclusions. This matters most in high-stakes applications, such as healthcare or legal systems, where automated decisions can significantly affect people’s lives. Intelligibility is closely related to transparency and interpretability, and its goal is to make AI models accessible to non-technical users, allowing a better understanding of their internal workings. As AI becomes more integrated into society, the need for systems that are not only effective but also comprehensible grows increasingly evident. Beyond enhancing user trust, intelligibility facilitates the identification of biases and errors in models, promoting a more ethical and responsible use of technology.
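The idea that a model should convey how and why it reached a conclusion can be sketched with a minimal, hypothetical example: a transparent linear scorer that decomposes each prediction into per-feature contributions. All names, weights, and the loan-approval framing below are illustrative assumptions, not a real system.

```python
def predict_with_explanation(weights, bias, features):
    """Return (score, contributions), where each feature's contribution
    is its weight times its value -- an explanation of *why* the model
    produced this score, not just the score itself."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-approval model: weights and feature names are invented
# purely to illustrate an intelligible (self-explaining) prediction.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
bias = 0.1
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 3.0}

score, why = predict_with_explanation(weights, bias, applicant)
print(f"score = {score:.2f}")
# List contributions from most to least influential, so a non-technical
# user can see which factors drove the decision.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

A breakdown like this is what lets a reviewer spot biases or errors: if, say, a feature that should be irrelevant dominates the contributions, the problem is visible directly in the explanation rather than hidden inside the model.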