Ji Liu
Tensor Completion for Estimating Missing Values in Visual Data
J Liu, P Musialski, P Wonka, J Ye
IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (1), 208-220, 2013
Sparse reconstruction cost for abnormal event detection
Y Cong, J Yuan, J Liu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3449-3456, 2011
Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent
X Lian, C Zhang, H Zhang, CJ Hsieh, W Zhang, J Liu
arXiv preprint arXiv:1705.09056, 2017
Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
X Lian, Y Huang, Y Li, J Liu
NIPS (arXiv preprint arXiv:1506.08272), 2015
An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
J Liu, SJ Wright, C Ré, V Bittorf, S Sridhar
arXiv preprint arXiv:1311.1873, 2013
Gradient sparsification for communication-efficient distributed optimization
J Wangni, J Wang, J Liu, T Zhang
arXiv preprint arXiv:1710.09854, 2017
Abnormal event detection in crowded scenes using sparse representation
Y Cong, J Yuan, J Liu
Pattern Recognition 46 (7), 1851-1864, 2013
Asynchronous decentralized parallel stochastic gradient descent
X Lian, W Zhang, C Zhang, J Liu
International Conference on Machine Learning, 3043-3052, 2018
Asynchronous stochastic coordinate descent: Parallelism and convergence properties
J Liu, SJ Wright
SIAM Journal on Optimization 25 (1), 351-376, 2015
Staleness-aware Async-SGD for Distributed Deep Learning
W Zhang, S Gupta, X Lian, J Liu
IJCAI (arXiv preprint arXiv:1511.05950), 2015
ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning
H Zhang, J Li, K Kara, D Alistarh, J Liu, C Zhang
International Conference on Machine Learning, 4035-4043, 2017
D²: Decentralized Training over Decentralized Data
H Tang, X Lian, M Yan, C Zhang, J Liu
International Conference on Machine Learning, 4848-4856, 2018
Communication compression for decentralized training
H Tang, S Gan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1803.06443, 2018
Learning incoherent sparse and low-rank patterns from multiple tasks
J Chen, J Liu, J Ye
ACM Transactions on Knowledge Discovery from Data 5 (4), 2012
Finite-Sample Analysis of Proximal Gradient TD Algorithms
B Liu, J Liu, M Ghavamzadeh, S Mahadevan, M Petrik
UAI, 2015
DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression
H Tang, C Yu, X Lian, T Zhang, J Liu
International Conference on Machine Learning, 6155-6165, 2019
Exclusive Feature Learning on Arbitrary Structures via ℓ₁,₂-norm
D Kong, R Fujimaki, J Liu, F Nie, C Ding
Advances in neural information processing systems, 1655-1663, 2014
Accelerating stochastic composition optimization
M Wang, J Liu, EX Fang
Journal of Machine Learning Research, 2017
An Accelerated Randomized Kaczmarz Algorithm
J Liu, SJ Wright
arXiv preprint arXiv:1310.2887, 2013
Sparse non-negative tensor factorization using columnwise coordinate descent
J Liu, J Liu, P Wonka, J Ye
Pattern Recognition 45 (1), 649-656, 2012