Federated learning algorithm based on knowledge distillation
Federated learning is a new distributed machine learning scheme that enables a large number of edge computing devices to jointly learn a shared model without sharing their private data. Federated learning allows nodes to synchronize only their locally trained models instead of their own private data, w...
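
The preview above describes the core federated learning workflow: each device trains on its own data, and only the resulting model parameters are synchronized. As a rough illustration of that idea (not code from the paper), the sketch below shows a FedAvg-style round in PyTorch; the model, client data loaders, and hyperparameters are placeholders.

```python
# Hedged sketch of one federated learning round: clients train locally on
# private data and return only their weights; the server averages them.
# Nothing here is taken from the FedDistill paper itself.
import copy
import torch
import torch.nn as nn


def local_update(global_model, data_loader, epochs=1, lr=0.01):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in data_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model.state_dict()  # only weights leave the device, never the data


def federated_average(client_states):
    """Server-side aggregation: element-wise mean of the clients' weights."""
    averaged = copy.deepcopy(client_states[0])
    for key in averaged:
        averaged[key] = torch.stack(
            [state[key].float() for state in client_states]
        ).mean(dim=0)
    return averaged


# One communication round over hypothetical client loaders:
#   states = [local_update(global_model, loader) for loader in client_loaders]
#   global_model.load_state_dict(federated_average(states))
```
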
| Main Authors: | Jiang, Donglin; Shan, Chen; Zhang, Zhihui |
|---|---|
| Format: | Conference or Workshop Item |
| Language: | English |
| Published: | 2021 |
| Subjects: | Federated learning; Knowledge distillation; Non-independent-identical-distribution; Heterogeneous network |
| Online Access: | https://eprints.nottingham.ac.uk/65181/ |
| author | Jiang, Donglin; Shan, Chen; Zhang, Zhihui |
|---|---|
| building | Nottingham Research Data Repository |
| collection | Online Access |
| description | Federated learning is a new distributed machine learning scheme that enables a large number of edge computing devices to jointly learn a shared model without sharing their private data. Nodes synchronize only their locally trained models rather than the data itself, which provides a measure of privacy and security. However, federated learning faces two kinds of heterogeneity: (1) heterogeneous model architectures across devices, and (2) statistical heterogeneity in real federated datasets, whose data are not independent and identically distributed (non-IID); both cause traditional federated learning algorithms to perform poorly. To address these problems, this paper proposes FedDistill, a new distributed training method based on knowledge distillation. A personalized model is introduced on each device to improve local performance even when the global model fails to adapt to the local dataset, thereby also improving the capability and robustness of the global model. The local improvement comes from knowledge distillation, which transfers knowledge between heterogeneous networks and in turn guides the improvement of the global model. Experiments show that FedDistill significantly improves the accuracy of classification tasks and meets the needs of heterogeneous users. (A minimal sketch of this local distillation step appears after this record.) |
| format | Conference or Workshop Item |
| id | nottingham-65181 |
| institution | University of Nottingham Malaysia Campus |
| institution_category | Local University |
| language | English |
| publishDate | 2021 |
| recordtype | eprints |
| repository_type | Digital Repository |
| spelling | nottingham-65181 2021-05-07T08:51:01Z https://eprints.nottingham.ac.uk/65181/ Federated learning algorithm based on knowledge distillation. Jiang, Donglin; Shan, Chen; Zhang, Zhihui. 2021-03-01. Conference or Workshop Item. PeerReviewed. application/pdf. en. cc_by. https://eprints.nottingham.ac.uk/65181/1/Title%20Pages%20Example%20%200.6-%E5%B7%B2%E8%9E%8D%E5%90%88%20%284%29.pdf Jiang, Donglin, Shan, Chen and Zhang, Zhihui (2021) Federated learning algorithm based on knowledge distillation. In: 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE). Federated learning; Knowledge distillation; Non-independent-identical-distribution; Heterogeneous network. http://dx.doi.org/10.1109/ICAICE51518.2020.00038 |
| title | Federated learning algorithm based on knowledge distillation |
| topic | Federated learning; Knowledge distillation; Non-independent-identical-distribution; Heterogeneous network |
| url | https://eprints.nottingham.ac.uk/65181/ |
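
The description field above says that FedDistill adds a personalized model on each device and uses knowledge distillation to transfer knowledge between heterogeneous networks. The record does not give the paper's exact objective, so the sketch below stands in the standard Hinton-style distillation loss: the personalized (student) model fits its local labels while matching the softened predictions of the global (teacher) model. The weighting `alpha`, temperature `T`, and all model and data names are assumptions for illustration.

```python
# Hedged sketch of a knowledge-distillation-based local update, assuming a
# standard soft-target distillation loss (not necessarily FedDistill's exact
# formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Blend hard-label cross-entropy with soft-label KL divergence."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft


def personalize(student: nn.Module, teacher: nn.Module, data_loader,
                epochs=1, lr=0.01):
    """Train the personalized local model on private data, guided by the
    global model's softened predictions."""
    optimizer = torch.optim.SGD(student.parameters(), lr=lr)
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for x, y in data_loader:
            with torch.no_grad():
                teacher_logits = teacher(x)
            loss = distillation_loss(student(x), teacher_logits, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```

Because the loss compares only output logits, the student and teacher can use different architectures, which matches the record's emphasis on heterogeneous networks.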