Description: The justification of AI decisions refers to the process of explaining and validating the decisions made by artificial intelligence systems. This concept is fundamental to the development and deployment of AI systems, as it allows users and developers to understand how and why particular conclusions or recommendations were reached. Transparency in AI decisions is crucial, especially in critical applications such as medicine, justice, and finance, where decisions can have a significant impact on people’s lives. Justifying AI decisions not only helps build trust in these systems but is also essential for identifying and correcting biases or errors in the underlying algorithms. As AI becomes more integrated into society, the need for clear and accessible justification grows more pressing, driving research into explainability techniques (a small illustration follows below) and the creation of regulatory frameworks that demand greater transparency in the use of artificial intelligence.
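
To make the idea of an explainability technique concrete, the following is a minimal sketch of permutation feature importance, one common approach for justifying a model's decisions. The dataset, model choice, and feature names are illustrative assumptions introduced here, not part of the original description.

```python
# Minimal sketch of permutation feature importance (illustrative assumptions:
# synthetic data, RandomForestClassifier, hypothetical feature names).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for a decision task
# (e.g., a loan-approval-style problem); feature names are hypothetical.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "age", "credit_history"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Shuffle one feature at a time and measure how much accuracy drops;
# a larger drop suggests that feature contributed more to the decisions,
# offering one simple form of justification for the model's behavior.
rng = np.random.default_rng(0)
for i, name in enumerate(feature_names):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, i])
    drop = baseline - model.score(X_perm, y_test)
    print(f"{name}: accuracy drop {drop:.3f}")
```

A report of this kind (which features mattered, and by how much) is one simple way a system's conclusions can be made inspectable by users, auditors, or regulators.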