Description: Moral agency refers to the capacity of an entity to act with reference to what is right and wrong, a concept of particular importance for artificial intelligence (AI) systems. It implies that an entity, whether human or artificial, can make decisions that are not merely functional but also account for their ethical and moral implications. In AI, moral agency raises fundamental questions of responsibility and accountability: if an AI system makes decisions that affect people, can that system be held morally responsible for its actions? Moral agency is also tied to whether AI systems can understand and apply ethical principles, which is crucial in applications where decisions significantly affect human lives, such as healthcare, criminal justice, and autonomous technologies. The discussion of moral agency in AI is essential for developing technologies that are not only efficient but also respect human values and rights, promoting a future in which technology and ethics coexist harmoniously.