Rate distortion optimization for adaptive gradient quantization in federated learning
Federated Learning (FL) is an emerging machine learning framework designed to preserve privacy. However, continuously uploading model updates over uplink channels with limited throughput incurs heavy communication overhead, which is a major challenge for FL. To address this issue, we propose an adaptive gradient quantization approach that enhances communication efficiency. Aiming to minimize the total communication cost, we exploit the correlation of gradients both between local clients and between communication rounds, that is, in the space and time dimensions. The compression strategy is based on rate distortion theory, which allows us to find an optimal quantization strategy for the gradients. To further reduce the computational complexity, we introduce a Kalman filter into the proposed approach. Finally, numerical results demonstrate the effectiveness and robustness of the proposed rate-distortion-optimized adaptive gradient quantization approach, which significantly reduces communication costs compared with other quantization methods.
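The abstract describes the method only at a high level. As a rough, non-authoritative sketch of the two ingredients it names, the Python snippet below combines (i) rate-distortion-style bit allocation, following the standard Gaussian rate-distortion function R(D) = (1/2) log2(sigma^2 / D), which awards more bits to higher-variance gradient blocks, and (ii) a Kalman-filter-flavoured predictive step that quantizes only the innovation between consecutive rounds, exploiting temporal gradient correlation. The block partition, the AR(1) gradient model, the constant prediction gain of 0.9, and the uniform quantizer are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def allocate_bits(block_vars, total_bits):
    """Equal-distortion bit allocation across gradient blocks.

    Under a Gaussian model, a block with variance s^2 coded at distortion D
    costs R = 0.5 * log2(s^2 / D) bits per sample, so equalizing distortion
    gives each block the average budget plus half its log-variance gap.
    """
    log_vars = np.log2(np.maximum(block_vars, 1e-12))
    bits = total_bits / len(block_vars) + 0.5 * (log_vars - log_vars.mean())
    return np.clip(bits, 1.0, None)  # keep every block at >= 1 bit

def quantize(x, n_bits):
    """Uniform mid-rise quantizer with 2**n_bits levels over x's range."""
    levels = 2 ** int(round(n_bits))
    lo, hi = float(x.min()), float(x.max())
    if hi <= lo:
        return x.copy()
    step = (hi - lo) / levels
    idx = np.minimum(np.floor((x - lo) / step), levels - 1)
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(0)
grad_prev = rng.normal(size=1024)                     # last round's gradient
grad = 0.9 * grad_prev + 0.1 * rng.normal(size=1024)  # temporally correlated

# Kalman-flavoured predictive step (the gain 0.9 is an assumed constant):
# transmit only the innovation, whose variance is far below the gradient's.
innovation = grad - 0.9 * grad_prev
blocks = innovation.reshape(16, 64)
bits = allocate_bits(blocks.var(axis=1), total_bits=16 * 6)  # avg ~6 bits/sample
recon = np.stack([quantize(b, r) for b, r in zip(blocks, bits)])
grad_hat = 0.9 * grad_prev + recon.ravel()
print("reconstruction MSE:", float(np.mean((grad - grad_hat) ** 2)))
```

Because the innovation's variance is much smaller than the raw gradient's when successive rounds are correlated, the same bit budget achieves a lower reconstruction error; this is the intuition behind exploiting the time-dimension correlation the abstract mentions.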
| Main Authors: | Guojun Chen, Kaixuan Xie, Wenqiang Luo, Yinfei Xu, Lun Xin, Tiecheng Song, Jing Hu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | KeAi Communications Co., Ltd., 2024-12-01 |
| Series: | Digital Communications and Networks |
| Subjects: | Federated learning; Communication efficiency; Adaptive quantization; Rate distortion |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S235286482400018X |
| ISSN: | 2352-8648 |
| Volume/Issue: | Vol. 10, No. 6 (December 2024), pp. 1813–1825 |

Author affiliations:

- Guojun Chen: National Mobile Communication Research Laboratory, Southeast University, Nanjing 210096, China; School of Information Science and Engineering, Southeast University, Nanjing 210096, China
- Kaixuan Xie: China Mobile Research Institute, Beijing 100053, China
- Wenqiang Luo: School of Information Science and Engineering, Southeast University, Nanjing 210096, China
- Yinfei Xu: School of Information Science and Engineering, Southeast University, Nanjing 210096, China
- Lun Xin: China Mobile Research Institute, Beijing 100053, China
- Tiecheng Song (corresponding author): National Mobile Communication Research Laboratory, Southeast University, Nanjing 210096, China; School of Information Science and Engineering, Southeast University, Nanjing 210096, China
- Jing Hu: National Mobile Communication Research Laboratory, Southeast University, Nanjing 210096, China; School of Information Science and Engineering, Southeast University, Nanjing 210096, China