Description: Trust Models, in the context of Explainable AI, are conceptual frameworks for evaluating and building trust in artificial intelligence systems through their transparency and reliability. These models focus on an AI system's ability to provide understandable explanations of its decisions and processes, which is essential if users and stakeholders are to trust its operation. Transparency refers to how clearly a system can communicate how and why it reaches a specific decision, while reliability means that the system behaves consistently and predictably across varied situations. The combination of these elements is crucial for fostering acceptance and responsible use of AI in critical sectors such as healthcare, justice, and finance. As AI becomes more integrated into everyday life, models that help ensure trust grow increasingly relevant, since users seek assurance that automated decisions are fair, ethical, and grounded in sound data.
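
As an illustration only, the sketch below shows one hypothetical way a trust model might quantify the two elements described above: a transparency proxy based on how often a simple surrogate explanation agrees with the underlying model (a fidelity-style measure), and a reliability proxy based on how stable predictions remain under small input perturbations, combined with an assumed weighting. All function names, metrics, and weights are illustrative assumptions, not part of any standard trust model.

```python
# Illustrative sketch only: a toy "trust score" combining a transparency
# proxy (explanation fidelity) with a reliability proxy (prediction
# stability under small perturbations). Names and weights are assumptions
# for demonstration, not a prescribed or standard trust model.
import numpy as np

def transparency_score(model_predict, surrogate_predict, X):
    """Fraction of inputs where a simple surrogate explanation agrees
    with the underlying model (a fidelity-style proxy for transparency)."""
    return float(np.mean(model_predict(X) == surrogate_predict(X)))

def reliability_score(model_predict, X, noise_scale=0.01, trials=10, seed=0):
    """Fraction of predictions that stay unchanged when inputs receive
    small random perturbations (a consistency proxy for reliability)."""
    rng = np.random.default_rng(seed)
    base = model_predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(trials):
        perturbed = X + rng.normal(scale=noise_scale, size=X.shape)
        stable &= (model_predict(perturbed) == base)
    return float(np.mean(stable))

def trust_score(transparency, reliability, w_transparency=0.5):
    """Weighted combination of the two proxies; the weight is an
    illustrative assumption, not an established standard."""
    return w_transparency * transparency + (1 - w_transparency) * reliability

if __name__ == "__main__":
    # Toy model: classify points by the sign of their feature sum.
    model = lambda X: (X.sum(axis=1) > 0).astype(int)
    # Toy surrogate "explanation": threshold on the first feature only.
    surrogate = lambda X: (X[:, 0] > 0).astype(int)

    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 3))

    t = transparency_score(model, surrogate, X)
    r = reliability_score(model, X)
    print(f"transparency={t:.2f}  reliability={r:.2f}  trust={trust_score(t, r):.2f}")
```

In practice, a trust model would replace these toy proxies with domain-appropriate measures (for example, explanation fidelity audits or stability tests agreed on with stakeholders); the point of the sketch is only that transparency and reliability can be made measurable and combined explicitly rather than assessed informally.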