Energy-Efficient Federated Learning in Edge Networks Using Sparse Update Compression and ADMM-Convergent Scheduling Under Varying Load Conditions
Keywords: Federated Learning; Edge Networks; Sparse Update Compression; ADMM Scheduling; Energy Efficiency; Load-Adaptive Training

Abstract
Federated Learning (FL) at the network edge offers a promising route to privacy-preserving model training across distributed devices. However, the stringent energy budgets and highly variable computational loads of edge nodes pose significant challenges: frequent gradient exchanges incur heavy communication overhead, and naïve client scheduling can stall convergence or exhaust device batteries. In this work, we introduce a joint sparse-update compression and ADMM-convergent scheduling framework to minimize overall energy consumption while preserving learning accuracy under time-varying load conditions. First, each client applies a tunable sparsification and error-feedback scheme to its local model updates, reducing uplink traffic by up to 90% with negligible impact on convergence. Second, we cast client selection and aggregation timing as an Alternating Direction Method of Multipliers (ADMM) subproblem, deriving provably convergent update rules that adaptively prioritize low-energy or under-loaded nodes. Through simulations on CIFAR-10 and FEMNIST benchmarks with realistic edge-cloud latency and load traces, our approach achieves up to a 35% reduction in per-round energy cost and 20% faster convergence compared to state-of-the-art FL protocols. These results demonstrate that integrated compression and scheduling is key to energy-efficient, robust FL in resource-constrained edge networks.
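The sparsification-with-error-feedback scheme described above can be illustrated with a minimal sketch. The abstract does not specify the exact compressor, so this assumes the common top-k magnitude variant: each round, a client transmits only the largest-magnitude coordinates of its (error-corrected) update and carries the untransmitted remainder forward as a residual. All function and parameter names here are illustrative, not the paper's.

```python
import numpy as np

def sparsify_with_error_feedback(update, residual, keep_ratio=0.1):
    """Top-k sparsification with error feedback (illustrative sketch).

    update:     dense local model update (1-D array)
    residual:   accumulated compression error from previous rounds
    keep_ratio: fraction of coordinates transmitted
                (0.1 -> roughly 90% uplink savings)
    """
    corrected = update + residual          # re-inject past compression error
    k = max(1, int(keep_ratio * corrected.size))
    idx = np.argpartition(np.abs(corrected), -k)[-k:]  # top-k magnitudes
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]           # only these coordinates are sent
    new_residual = corrected - sparse      # rest is deferred to next round
    return sparse, new_residual
```

Because `sparse + new_residual` always equals the error-corrected update, no gradient information is lost permanently; it is only delayed, which is why such schemes typically have negligible impact on convergence.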
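The ADMM-based scheduling step can likewise be sketched in simplified form. The abstract does not give the paper's exact objective, so the toy formulation below is an assumption: choose k of n clients by minimizing a linear per-client energy cost plus a quadratic regularizer, subject to a selection budget, solved with standard ADMM splitting (smooth x-step in closed form, z-step as projection onto the capped simplex, scaled dual update).

```python
import numpy as np

def admm_schedule(energy, k, gamma=1.0, rho=1.0, iters=100):
    """Select k of n clients via ADMM (illustrative toy formulation):
        minimize  energy @ x + (gamma/2)*||x||^2
        subject to x in [0,1]^n,  sum(x) = k.
    Low-energy clients end up with the largest selection scores."""
    n = len(energy)
    z = np.full(n, k / n)                  # feasible starting point
    u = np.zeros(n)                        # scaled dual variable
    for _ in range(iters):
        # x-step: closed-form minimizer of smooth cost + quadratic penalty
        x = (rho * (z - u) - energy) / (gamma + rho)
        # z-step: project x + u onto {z in [0,1]^n : sum(z) = k}
        # via bisection on the shift tau (sum is monotone in tau)
        v = x + u
        lo, hi = v.min() - 1.0, v.max()
        for _ in range(60):
            tau = 0.5 * (lo + hi)
            if np.clip(v - tau, 0.0, 1.0).sum() > k:
                lo = tau
            else:
                hi = tau
        z = np.clip(v - 0.5 * (lo + hi), 0.0, 1.0)
        u += x - z                         # dual ascent on the consensus gap
    return np.argsort(z)[-k:]              # k clients with largest scores
```

Since the objective is strongly convex, these iterates provably converge; in the paper's setting the cost would additionally reflect time-varying load traces rather than a static energy vector.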
License

This work is licensed under a Creative Commons Attribution 4.0 International License.