Hanlin Tang
Title
Cited by
Year
D²: Decentralized Training over Decentralized Data
H Tang, X Lian, M Yan, C Zhang, J Liu
arXiv preprint arXiv:1803.07068, 2018
68 · 2018
Communication compression for decentralized training
H Tang, S Gan, C Zhang, T Zhang, J Liu
Advances in Neural Information Processing Systems, 7652-7662, 2018
60* · 2018
DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression
H Tang, C Yu, X Lian, T Zhang, J Liu
International Conference on Machine Learning, 6155-6165, 2019
30 · 2019
Distributed learning over unreliable networks
C Yu, H Tang, C Renggli, S Kassing, A Singla, D Alistarh, C Zhang, J Liu
International Conference on Machine Learning, 7202-7212, 2019
7 · 2019
Central server free federated learning over single-sided trust social networks
C He, C Tan, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1910.04956, 2019
6 · 2019
Decentralized Online Learning: Take Benefits from Others' Data without Sharing Your Own to Track Global Trend
Y Zhao, C Yu, P Zhao, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1901.10593, 2019
6 · 2019
DeepSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
2019
DeepSqueeze: Decentralization Meets Error-Compensated Compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
2019
Articles 1–8