Description: Proportionality in the ethics of artificial intelligence (AI) is the principle that actions and decisions taken in developing and deploying AI systems must be appropriate and balanced relative to the risks they entail. When designing and applying AI technologies, it is therefore essential to assess their potential impacts and consequences, so that the measures adopted are proportional to the expected benefits and the identified risks. Proportionality seeks to avoid disproportionate or excessive responses to situations that can be managed more equitably. The principle is central to fostering trust in AI, as it promotes a responsible and ethical approach to its use, ensuring that automated decisions are not only effective but also fair. In a context where AI can influence critical aspects of human life, such as health, safety, and privacy, proportionality becomes a fundamental pillar of the governance and regulation of these technologies, helping ensure they are used in a manner that respects human rights and values.