Description: Explainable decision trees are artificial intelligence models designed to be interpretable, providing clear reasoning for the decisions they make. Unlike more opaque models such as neural networks, a decision tree is structured as a flowchart of decisions and their possible consequences. Each internal node poses a question about a feature of the dataset, each branch corresponds to an answer to that question and leads to further nodes, and each leaf holds the final prediction. This hierarchical structure lets users follow the decision path from the root to a conclusion, making the model's behavior easy to understand. Interpretability is crucial in fields where automated decisions can have significant impact, such as healthcare, finance, and law. Explainable decision trees not only help experts validate and trust the results, but also allow non-experts to understand the reasoning behind decisions, promoting transparency and accountability in the use of artificial intelligence.
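As a minimal sketch of this idea (assuming scikit-learn and its bundled Iris dataset, neither of which is named above), the following trains a shallow decision tree and prints its learned rules as text, so the path from question to conclusion can be read directly:

```python
# Minimal sketch: inspect a decision tree's reasoning as readable rules.
# Assumes scikit-learn and the Iris dataset (illustrative choices, not from the text).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Load a small, well-known dataset.
iris = load_iris()
X, y = iris.data, iris.target

# Fit a shallow tree; limiting depth keeps the printed explanation short.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Print the learned rules: each line is a question about a feature threshold,
# and the indentation mirrors the decision path down to a leaf prediction.
print(export_text(tree, feature_names=iris.feature_names))
```

The printed output reads as a series of nested if/else questions on feature thresholds, which is exactly the kind of decision path the description above refers to.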