Hanlin Tang
Title · Cited by · Year
D²: Decentralized Training over Decentralized Data
H Tang, X Lian, M Yan, C Zhang, J Liu
arXiv preprint arXiv:1803.07068, 2018
Cited by 35 · 2018
Communication compression for decentralized training
H Tang, S Gan, C Zhang, T Zhang, J Liu
Advances in Neural Information Processing Systems, 7652-7662, 2018
Cited by 30* · 2018
DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression
H Tang, X Lian, T Zhang, J Liu
arXiv preprint arXiv:1905.05957, 2019
Cited by 7 · 2019
Distributed learning over unreliable networks
C Yu, H Tang, C Renggli, S Kassing, A Singla, D Alistarh, C Zhang, J Liu
arXiv preprint arXiv:1810.07766, 2018
Cited by 4 · 2018
DeepSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
Cited by 2 · 2019
Central Server Free Federated Learning over Single-sided Trust Social Networks
C He, C Tan, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1910.04956, 2019
Cited by 1 · 2019