Shaoduo Gan
Communication compression for decentralized training
H Tang, S Gan, C Zhang, T Zhang, J Liu
NeurIPS 2018, 2018
Towards Demystifying Serverless Machine Learning Training
J Jiang*, S Gan*, Y Liu, F Wang, G Alonso, A Klimovic, A Singla, W Wu, ...
SIGMOD 2021, 2021
1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
H Tang, S Gan, AA Awan, S Rajbhandari, C Li, X Lian, J Liu, C Zhang, ...
ICML 2021, 2021
Ease.ML: A Lifecycle Management System for Machine Learning
L Aguilar Melgar, D Dao, S Gan, NM Gürel, N Hollenstein, J Jiang, ...
CIDR 2021, 2021
Bagua: Scaling up Distributed Learning with System Relaxations
S Gan, X Lian, R Wang, J Chang, C Liu, H Shi, S Zhang, X Li, T Sun, ...
VLDB 2022, 2022
Few-shot named entity recognition with entity-level prototypical network enhanced by dispersedly distributed prototypes
B Ji, S Li, S Gan, J Yu, J Ma, H Liu
COLING 2022, 2022
In-Database Machine Learning with CorgiPile: Stochastic Gradient Descent without Full Data Shuffle
L Xu, S Qiu, B Yuan, J Jiang, C Renggli, S Gan, K Kara, G Li, J Liu, W Wu, ...
SIGMOD 2022, 2022
FRuDA: Framework for Distributed Adversarial Domain Adaptation
S Gan, A Mathur, A Isopoussu, F Kawsar, N Berthouze, ND Lane
IEEE Transactions on Parallel and Distributed Systems 33 (11), 3153-3164, 2021
A systematic evaluation of machine learning on serverless infrastructure
J Jiang*, S Gan*, B Du, G Alonso, A Klimovic, A Singla, W Wu, S Wang, ...
The VLDB Journal 33 (2), 425-449, 2024
Distributed Asynchronous Domain Adaptation: Towards Making Domain Adaptation More Practical in Real-World Systems
S Gan, A Mathur, A Isopoussu, N Berthouze, ND Lane, F Kawsar
Workshop on Systems for ML at NeurIPS 2019, 2019
Stochastic gradient descent without full data shuffle: with applications to in-database machine learning and deep learning systems
L Xu, S Qiu, B Yuan, J Jiang, C Renggli, S Gan, K Kara, G Li, J Liu, W Wu, ...
The VLDB Journal, 1-25, 2024
SqueezeAttention: 2D Management of KV-Cache in LLM Inference via Layer-wise Optimal Budget
Z Wang, S Gan
arXiv preprint arXiv:2404.04793, 2024