On the Byzantine-Resilience of Distillation-Based Federated Learning

  • Federated Learning (FL) algorithms using Knowledge Distillation (KD) have received increasing attention due to their favorable properties with respect to privacy, non-i.i.d. data, and communication cost. Rather than transmitting model parameters, these methods communicate information about a learning task by sharing predictions on a public dataset. In this work, we study the performance of such approaches in the byzantine setting, where a subset of the clients act adversarially with the aim of disrupting the learning process. We show that KD-based FL algorithms are remarkably resilient and analyze how byzantine clients can influence the learning process. Based on these insights, we introduce two new byzantine attacks and demonstrate that they break existing byzantine-resilient methods. Additionally, we propose a novel defence method which enhances the byzantine resilience of KD-based FL algorithms. Finally, we provide a general framework to obfuscate attacks, making them significantly harder to detect and thereby more effective.
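To make the prediction-sharing paradigm concrete, the following is a minimal Python/NumPy sketch of a single communication round: clients share soft predictions on a public dataset rather than model parameters, and the server aggregates them. The client counts, the crude single-class adversary, and the median-based robustification shown here are illustrative assumptions; they are not the specific attacks or the defence proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 10          # total clients (hypothetical)
NUM_BYZANTINE = 3         # adversarial clients (hypothetical)
PUBLIC_SET_SIZE = 100     # examples in the shared public dataset
NUM_CLASSES = 5

def honest_predictions() -> np.ndarray:
    """Soft predictions (softmax probabilities) of an honest client on the public set."""
    logits = rng.normal(size=(PUBLIC_SET_SIZE, NUM_CLASSES))
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def byzantine_predictions() -> np.ndarray:
    """A crude adversary: put all probability mass on one class to skew the consensus."""
    preds = np.zeros((PUBLIC_SET_SIZE, NUM_CLASSES))
    preds[:, 0] = 1.0
    return preds

# One communication round: every client shares its predictions on the public
# dataset instead of its model parameters.
shared = [honest_predictions() for _ in range(NUM_CLIENTS - NUM_BYZANTINE)]
shared += [byzantine_predictions() for _ in range(NUM_BYZANTINE)]
stacked = np.stack(shared)                    # shape: (clients, examples, classes)

# Baseline aggregation: plain mean of the shared predictions.
mean_agg = stacked.mean(axis=0)

# A simple robustified alternative: coordinate-wise median, renormalized so each
# row is again a probability distribution.
median_agg = np.median(stacked, axis=0)
median_agg /= median_agg.sum(axis=1, keepdims=True)

# The aggregate would then serve as the soft target for a distillation step on
# the server/student model; that training loop is omitted here.
print("mean   mass on attacked class:", float(mean_agg[:, 0].mean()))
print("median mass on attacked class:", float(median_agg[:, 0].mean()))
```

Because the coordinate-wise median discards extreme values, the colluding clients shift the consensus far less than under plain averaging; the byzantine influence analyzed in the paper, as well as its attacks and defence, operate on this same attack surface of shared predictions.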

Metadata
Author: Christophe Roux, Max Zimmer, Sebastian Pokutta
Document Type: In Proceedings
Parent Title (English): 13th International Conference on Learning Representations (ICLR 2025)
Year of first publication: 2025
arXiv ID: http://arxiv.org/abs/2402.12265
URL: https://openreview.net/pdf?id=of6EuHT7de