Xiangru Lian
Department of Computer Science, University of Rochester
Verified email at ur.rochester.edu - Homepage
Title · Cited by · Year
Asynchronous parallel stochastic gradient for nonconvex optimization
X Lian, Y Huang, Y Li, J Liu
Advances in Neural Information Processing Systems, 2737-2745, 2015
280 · 2015
Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent
X Lian, C Zhang, H Zhang, CJ Hsieh, W Zhang, J Liu
Advances in Neural Information Processing Systems, 5330-5340, 2017
268 · 2017
Staleness-aware Async-SGD for Distributed Deep Learning
W Zhang, S Gupta, X Lian, J Liu
International Joint Conference on Artificial Intelligence, 2016
139 · 2016
Asynchronous decentralized parallel stochastic gradient descent
X Lian, W Zhang, C Zhang, J Liu
International Conference on Machine Learning, 3043-3052, 2018
122 · 2018
D²: Decentralized Training over Decentralized Data
H Tang, X Lian, M Yan, C Zhang, J Liu
arXiv preprint arXiv:1803.07068, 2018
78 · 2018
A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order
X Lian, H Zhang, CJ Hsieh, Y Huang, J Liu
Advances in Neural Information Processing Systems, 2016
51 · 2016
Finite-sum Composition Optimization via Variance Reduced Gradient Descent
X Lian, M Wang, J Liu
Artificial Intelligence and Statistics, 2017
43 · 2017
DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression
H Tang, C Yu, X Lian, T Zhang, J Liu
International Conference on Machine Learning, 6155-6165, 2019
42 · 2019
Asynchronous Parallel Greedy Coordinate Descent
Y You*, X Lian* (equal contribution), J Liu, HF Yu, I Dhillon, J Demmel, ...
Advances in Neural Information Processing Systems, 2016
38 · 2016
NMR evidence for field-induced ferromagnetism in (Li0.8Fe0.2)OHFeSe superconductor
YP Wu, D Zhao, XR Lian, XF Lu, NZ Wang, XG Luo, XH Chen, T Wu
Physical Review B 91 (12), 125107, 2015
10 · 2015
Revisit Batch Normalization: New Understanding and Refinement via Composition Optimization
X Lian, J Liu
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
8 · 2019
Efficient smooth non-convex stochastic compositional optimization via stochastic recursive gradient descent
W Hu, CJ Li, X Lian, J Liu, H Yuan
Advances in Neural Information Processing Systems, 6929-6937, 2019
5 · 2019
Staleness-aware Async-SGD for Distributed Deep Learning. CoRR abs/1511.05950 (2015)
W Zhang, S Gupta, X Lian, J Liu
5 · 2015
Revisit Batch Normalization: New Understanding from an Optimization View and a Refinement via Composition Optimization
X Lian, J Liu
arXiv preprint arXiv:1810.06177, 2018
2 · 2018
Stochastic Recursive Momentum for Policy Gradient Methods
H Yuan, X Lian, J Liu, Y Zhou
arXiv preprint arXiv:2003.04302, 2020
1 · 2020
APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm
H Tang, S Gan, S Rajbhandari, X Lian, C Zhang, J Liu, Y He
arXiv preprint arXiv:2008.11343, 2020
2020
Stochastic Recursive Variance Reduction for Efficient Smooth Non-Convex Compositional Optimization
H Yuan, X Lian, J Liu
arXiv preprint arXiv:1912.13515, 2019
2019
DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
2019
DeepSqueeze: Decentralization Meets Error-Compensated Compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
2019
Large Scale Optimization for Deep Learning
X Lian
University of Rochester, 2019
2019