FedADMM-InSa: An Inexact and Self-Adaptive ADMM for Federated Learning

E. Zuazua. FedADMM-InSa: An Inexact and Self-Adaptive ADMM for Federated Learning (2024) https://doi.org/10.48550/arXiv.2402.13989

Abstract. Federated learning (FL) is a promising framework for learning from distributed data while maintaining privacy. The development of efficient FL algorithms encounters various challenges, including heterogeneous data and systems, limited communication capacities, and constrained local computational resources. Recently developed FedADMM methods show great resilience to both data and system heterogeneity. However, they still suffer from performance deterioration if the hyperparameters are not carefully tuned. To address this issue, we propose an inexact and self-adaptive FedADMM algorithm, termed FedADMM-InSa. First, we design an inexactness criterion for the clients’ local updates to eliminate the need for empirically setting the local training accuracy. This inexactness criterion can be assessed by each client independently based on its unique condition, thereby reducing the local computational cost and mitigating the undesirable straggler effect. The convergence of the resulting inexact ADMM is proved under the assumption of strongly convex loss functions. Additionally, we present a self-adaptive scheme that dynamically adjusts each client’s penalty parameter, enhancing algorithm robustness by mitigating the need for empirical penalty parameter choices for each client. Extensive numerical experiments on both synthetic and real-world datasets show that, compared with the vanilla FedADMM, the proposed algorithm significantly reduces the clients’ local computational load and accelerates the learning process.
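The two ingredients highlighted in the abstract, an inexactness criterion that lets each client stop its local solve early and a per-client self-adaptive penalty parameter, can be pictured with a generic consensus-ADMM skeleton. The sketch below is illustrative only and is not the FedADMM-InSa algorithm itself: it uses a placeholder gradient-norm stopping rule and a standard residual-balancing heuristic in place of the paper's specific criterion and adaptation scheme, and the synthetic quadratic losses, step sizes, and thresholds are assumptions made for the example.

```python
# Illustrative sketch only (not FedADMM-InSa itself): a consensus-ADMM
# federated loop with (i) an inexact local solve stopped by a placeholder
# gradient-norm tolerance and (ii) a residual-balancing heuristic standing
# in for the self-adaptive per-client penalty.
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients = 5, 4

# Assumed synthetic, strongly convex local losses f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((20, dim)) for _ in range(n_clients)]
b = [rng.standard_normal(20) for _ in range(n_clients)]
L = [np.linalg.norm(A[i], 2) ** 2 for i in range(n_clients)]  # Lipschitz constants of grad f_i

def aug_lagrangian_grad(i, x_i, z, lam_i, beta_i):
    """Gradient in x_i of f_i(x_i) + <lam_i, x_i - z> + (beta_i/2)||x_i - z||^2."""
    return A[i].T @ (A[i] @ x_i - b[i]) + lam_i + beta_i * (x_i - z)

z = np.zeros(dim)                                  # server (global) model
x = [np.zeros(dim) for _ in range(n_clients)]      # client models
lam = [np.zeros(dim) for _ in range(n_clients)]    # dual variables
beta = [1.0] * n_clients                           # per-client penalty parameters

for _ in range(100):                               # communication rounds
    z_prev = z.copy()
    for i in range(n_clients):
        # Inexact local update: gradient steps stopped early once the gradient
        # norm drops below a client-dependent tolerance (placeholder rule).
        tol = 1e-3 * (1.0 + np.linalg.norm(x[i] - z))
        for _ in range(200):
            g = aug_lagrangian_grad(i, x[i], z, lam[i], beta[i])
            if np.linalg.norm(g) <= tol:
                break
            x[i] = x[i] - g / (L[i] + beta[i])
        # Dual ascent step performed locally with the received z.
        lam[i] = lam[i] + beta[i] * (x[i] - z)

    # Server aggregation: penalty-weighted consensus update.
    z = sum(beta[i] * x[i] + lam[i] for i in range(n_clients)) / sum(beta)

    # Placeholder self-adaptive penalty: residual balancing per client.
    for i in range(n_clients):
        r_i = np.linalg.norm(x[i] - z)              # primal residual
        s_i = beta[i] * np.linalg.norm(z - z_prev)  # dual residual proxy
        if r_i > 10.0 * s_i:
            beta[i] *= 2.0
        elif s_i > 10.0 * r_i:
            beta[i] /= 2.0

print("max consensus gap:", max(np.linalg.norm(x[i] - z) for i in range(n_clients)))
```

In the actual FedADMM-InSa algorithm, the inexactness criterion and the penalty adaptation are designed so that convergence can be proved for strongly convex losses; the gradient-norm tolerance and residual-balancing rule above are only commonly used stand-ins chosen to keep the sketch self-contained.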

Read Full Paper

arXiv: 2402.13989
