Description: Joint performance metrics are essential tools in federated learning, an approach in which an artificial intelligence model is trained collaboratively without centralizing the data. These metrics evaluate a model trained across many devices or nodes, checking that learning quality holds up despite heterogeneous data and training conditions. They cover measures such as accuracy, recall, and F1-score, which are fundamental to understanding how the model behaves in a distributed environment. Joint performance metrics also help identify potential biases in the data and assess the effectiveness of model aggregation strategies, both of which are crucial for producing a final model that is robust and generalizable. In a setting where data privacy and security are paramount, these metrics play a further role: they allow the model to be evaluated without the server accessing sensitive user data, since clients report only aggregate results. In summary, joint performance metrics are a critical component of successful federated learning, providing a way to measure and optimize model performance in a collaborative, decentralized environment.
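As an illustration, the sketch below shows one common way such joint metrics can be computed: each client evaluates the shared model on its own held-out data and reports only aggregate numbers, which the server combines as a sample-weighted average, so raw data never leaves the clients. The `ClientReport` structure and `aggregate_metrics` function are hypothetical names for this sketch, not the API of any particular federated learning framework.

```python
# Minimal sketch (assumed names, no specific framework): joint metrics as
# sample-weighted averages of per-client evaluation results.
from dataclasses import dataclass

@dataclass
class ClientReport:
    """Metrics reported by one client after evaluating the shared model locally."""
    num_samples: int   # size of the client's local evaluation set
    accuracy: float    # fraction of correct predictions
    recall: float      # true-positive rate
    f1: float          # harmonic mean of precision and recall

def aggregate_metrics(reports: list[ClientReport]) -> dict[str, float]:
    """Combine per-client metrics, weighting each client by its sample count.

    This mirrors the weighting used by aggregation schemes such as FedAvg:
    clients with more evaluation data contribute proportionally more.
    """
    total = sum(r.num_samples for r in reports)
    if total == 0:
        raise ValueError("No evaluation samples reported by any client.")
    return {
        "accuracy": sum(r.accuracy * r.num_samples for r in reports) / total,
        "recall":   sum(r.recall * r.num_samples for r in reports) / total,
        "f1":       sum(r.f1 * r.num_samples for r in reports) / total,
    }

if __name__ == "__main__":
    # Hypothetical reports from three clients with heterogeneous data.
    reports = [
        ClientReport(num_samples=1200, accuracy=0.91, recall=0.88, f1=0.89),
        ClientReport(num_samples=300,  accuracy=0.84, recall=0.80, f1=0.82),
        ClientReport(num_samples=2500, accuracy=0.93, recall=0.90, f1=0.91),
    ]
    print(aggregate_metrics(reports))
```

One caveat worth noting: because the weighted average favors clients with large datasets, it can mask poor performance on smaller clients; reporting per-client breakdowns or worst-case metrics alongside the joint value is one way to surface the biases mentioned above.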