Description: In the context of artificial intelligence (AI), ‘lack of transparency’ refers to the opacity of the processes and algorithms underlying the decisions these systems make. This opacity can generate distrust among users and stakeholders, who cannot see how particular conclusions or recommendations were reached. It manifests in various forms, most notably the complexity of AI models that are often described as ‘black boxes’: their inputs and outputs are visible, but the internal process that connects them is incomprehensible. This raises serious ethical concerns, especially in critical applications such as criminal justice, healthcare, and hiring, where decisions can significantly affect people’s lives. Opaque systems can harbor unintended biases, produce discriminatory outcomes, and escape accountability, which underscores the need to develop AI systems that are more explainable and accessible. As AI becomes increasingly integrated into decision-making across sectors, transparency becomes an essential requirement for fostering trust and ensuring that these technologies are used ethically and fairly.
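The ‘black box’ contrast above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the loan-style feature names, weights, and threshold are all invented for the sketch, not drawn from any real system): the same decision rule is exposed once as an opaque function that returns only a verdict, and once as a transparent variant that also reports each input’s contribution, so the outcome can be audited and contested.

```python
# Hypothetical weights and threshold, invented for this sketch.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def opaque_decision(applicant):
    # 'Black box' view: callers see the input and the yes/no output,
    # but nothing about how the score was formed.
    score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return score >= THRESHOLD

def transparent_decision(applicant):
    # Transparent variant: returns the same decision plus a per-feature
    # breakdown, making the reasoning visible and reviewable.
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

applicant = {"income": 3.0, "debt": 1.0, "years_employed": 2.0}
approved, why = transparent_decision(applicant)
print(approved)  # same verdict as opaque_decision(applicant)
print(why)       # per-feature contributions behind that verdict
```

Real AI models are, of course, far more complex than a weighted sum, which is precisely why dedicated explainability techniques (feature-attribution methods, interpretable surrogate models) exist; the sketch only makes the conceptual difference between an opaque and a transparent decision process concrete.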