
Roy Saurabh Highlights Federated Learning as Key to a More Ethical and Secure AI in His Talk at CITIC of UDC
- The UNESCO advisor emphasized the importance of data governance and responsible AI systems.
- Saurabh stressed the need to protect individual privacy, particularly that of the most vulnerable groups, who are especially susceptible to the misuse and exploitation of their data.
A Coruña, April 10, 2025.- In his lecture this Thursday at CITIC of the University of A Coruña (UDC), Roy Saurabh, Senior Advisor to UNESCO specializing in AI ethics, focused on how the integration of federated learning could revolutionize the handling of sensitive data while enhancing security and privacy in key sectors such as health and education.
In his talk, titled "Integrating Federated Learning with Data Governance Frameworks for Collaborative and Secure Analysis of Sensitive Data," Saurabh explained that one of the greatest challenges for AI systems is handling personal data ethically. Despite the growing demand for predictive models to tackle complex issues such as well-being, school dropout, or social exclusion, the collection and use of personal data raise serious concerns around privacy and security.
"The challenge is not whether personal data should be used, but how to use it responsibly under strict governance frameworks," Saurabh emphasized, underlining the need to protect individual privacy, especially for vulnerable groups such as at-risk children, people with disabilities, or migrant communities, who are particularly exposed to data misuse and exploitation.
The Impact of Federated Learning on Privacy
The core concept of the talk was federated learning, an AI technique that enables training models without transferring original data to central servers. "Federated learning allows AI models to be trained across multiple institutions while keeping data local, which reduces the risk of leaks and keeps the data in its original context," Saurabh explained.
Using an example from his work in adolescent well-being, he detailed how this approach was applied to train predictive models using biometric, behavioral, and environmental data: «Data is never centralized. Instead, models are trained locally, and only encrypted updates are shared. This ensures sensitive data remains protected while still generating valuable insights,» he added.
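The workflow he describes, local training with only model updates leaving each institution, can be illustrated with a minimal federated-averaging sketch. This is an illustrative toy, not Saurabh's actual system: it uses a hypothetical one-parameter linear model, invented client datasets, and omits the encryption of updates that a real deployment would add.

```python
# Minimal federated-averaging sketch: each client trains on its own
# private data; only the resulting model parameter is shared, never
# the raw data. (Real systems would also encrypt these updates.)

def local_update(w, data, lr=0.01):
    """One gradient step on a client's private (x, y) pairs for the
    toy model y = w * x with squared-error loss. Returns only the
    updated weight -- the data never leaves the client."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, clients, lr=0.01):
    """Server broadcasts w, collects each client's local update,
    and averages them into the new global model."""
    updates = [local_update(w, data, lr) for data in clients]
    return sum(updates) / len(updates)

# Hypothetical private datasets held by three institutions (never pooled).
clients = [
    [(1.0, 2.1), (2.0, 4.0)],
    [(1.5, 3.2), (3.0, 5.9)],
    [(0.5, 1.0), (2.5, 5.1)],
]

w = 0.0
for _ in range(200):
    w = federated_round(w, clients)

print(round(w, 2))  # converges near the underlying shared slope (~2)
```

The key property is visible in the code: `federated_round` only ever sees the clients' weight values, so the global model improves while each dataset stays in its original context.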
Saurabh also emphasized that this method helps comply with data protection regulations and respects users’ decisions on how and why their information is used.
Ethics and Governance in AI: An Urgent Need
Saurabh stressed that AI cannot move forward without a strong ethical framework, and in this context, public policy plays a crucial role, particularly in sensitive sectors like education, health, or welfare. "Public institutions must manage AI responsibly, establishing strict controls over models, data, and transparency throughout the process," he said.
He also highlighted the importance of striking a balance between individual rights and scientific research, arguing that regulations should not be seen as an obstacle but as a way to ensure innovations in AI are developed sustainably and fairly. "Regulation does not limit innovation; it guides it to evolve in a way that respects societal rights and values," he affirmed.
Commitment to Responsible AI
Roy Saurabh concluded by emphasizing that ethical AI is not an abstract idea, but something that must be implemented through system architecture and continuously monitored. «AI ethics must be embedded from the design stage—not only through guidelines, but with technical mechanisms that ensure compliance with regulations and ethical principles,» he concluded.
The conference, organized by CITIC of UDC and the Integrated Engineering Group (GII), reaffirmed the scientific and technological community’s commitment to the development of responsible AI—one that protects privacy, ensures data governance, and promotes equity in the use of emerging technologies.