Description: Model bias is the systematic error that occurs when an artificial intelligence (AI) model produces unfair or disproportionate results for certain groups. It typically originates in the training data used to develop the model: if that data carries inherent biases, whether from skewed sample selection, unequal representation of demographic groups, or embedded stereotypes, the model will learn and reproduce those biases in its predictions. This can lead to discriminatory decisions in high-stakes applications such as hiring, loan approval, or criminal justice. Model bias is therefore a central concern of AI ethics: it degrades not only the accuracy of results but also raises serious questions of fairness and social justice. Identifying and mitigating model bias is essential to ensure that AI technologies are responsible and benefit all sectors of society, which makes transparency in development processes and diversity in datasets fundamental to addressing this ethical challenge.
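As a concrete illustration of how such bias can be detected, the sketch below computes the demographic parity difference, i.e., the gap in positive-prediction rates between two groups. All data, group labels, and the function name are hypothetical and chosen for illustration; this is one common fairness metric among many, not a complete bias audit.

```python
def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups:      list of group labels ("A" or "B"), aligned with predictions
    """
    rate = {}
    for g in ("A", "B"):
        # Collect the model's outputs for members of group g
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return rate["A"] - rate["B"], rate

# Synthetic hiring-model outputs: group A was overrepresented among
# positive examples in the (hypothetical) training data, and the model
# replicates that skew in its predictions.
predictions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B", "A", "B"]

diff, rates = demographic_parity_difference(predictions, groups)
print(f"Positive rate A: {rates['A']:.2f}, B: {rates['B']:.2f}, gap: {diff:.2f}")
# Output: Positive rate A: 0.80, B: 0.20, gap: 0.60
```

A gap near zero suggests the model treats the two groups similarly on this metric; a large gap, as in this synthetic example, is a signal to investigate the training data and model before deployment.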