Yi Xu
Title · Cited by · Year
Dash: Semi-supervised learning with dynamic thresholding
Y Xu, L Shang, J Ye, Q Qian, YF Li, B Sun, H Li, R Jin
International Conference on Machine Learning, 11525-11536, 2021
Cited by 166 · 2021
First-order stochastic algorithms for escaping from saddle points in almost linear time
Y Xu, R Jin, T Yang
Advances in Neural Information Processing Systems, 5530-5540, 2018
Cited by 130 · 2018
Practical and theoretical considerations in study design for detecting gene-gene interactions using MDR and GMDR approaches
GB Chen, Y Xu, HM Xu, MD Li, J Zhu, XY Lou
PLoS ONE 6 (2), e16981, 2011
Cited by 71 · 2011
Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization
Y Yan, Y Xu, Q Lin, W Liu, T Yang
Advances in Neural Information Processing Systems 33, 5789-5800, 2020
Cited by 62* · 2020
ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization
Y Xu, M Liu, Q Lin, T Yang
Advances in Neural Information Processing Systems 30, 2017
Cited by 58 · 2017
On stochastic moving-average estimators for non-convex optimization
Z Guo, Y Xu, W Yin, R Jin, T Yang
arXiv preprint arXiv:2104.14840, 2021
Cited by 51 · 2021
Chex: Channel exploration for CNN model compression
Z Hou, M Qin, F Sun, X Ma, K Yuan, Y Xu, YK Chen, R Jin, Y Xie, SY Kung
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022
Cited by 49 · 2022
Self-supervised pre-training for transformer-based person re-identification
H Luo, P Wang, Y Xu, F Ding, Y Zhou, F Wang, H Li, R Jin
arXiv preprint arXiv:2111.12084, 2021
Cited by 45 · 2021
Stochastic convex optimization: Faster local growth implies faster global convergence
Y Xu, Q Lin, T Yang
International Conference on Machine Learning, 3821-3830, 2017
Cited by 44 · 2017
Stochastic optimization for DC functions and non-smooth non-convex regularizers with non-asymptotic convergence
Y Xu, Q Qi, Q Lin, R Jin, T Yang
International Conference on Machine Learning, 6942-6951, 2019
Cited by 41 · 2019
Towards understanding label smoothing
Y Xu, Y Xu, Q Qian, H Li, R Jin
arXiv preprint arXiv:2006.11653, 2020
Cited by 39 · 2020
Sadagrad: Strongly adaptive stochastic gradient methods
Z Chen*, Y Xu*, E Chen, T Yang
International Conference on Machine Learning, 913-921, 2018
Cited by 33 · 2018
Improved fine-tuning by better leveraging pre-training data
Z Liu, Y Xu, Y Xu, Q Qian, H Li, X Ji, A Chan, R Jin
Advances in Neural Information Processing Systems 35, 32568-32581, 2022
Cited by 29* · 2022
An online method for a class of distributionally robust optimization with non-convex objectives
Q Qi, Z Guo, Y Xu, R Jin, T Yang
Advances in Neural Information Processing Systems 34, 2021
Cited by 29 · 2021
Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity
Z Yuan, Z Guo, Y Xu, Y Ying, T Yang
International Conference on Machine Learning, 12219-12229, 2021
Cited by 29 · 2021
Stochastic Primal-Dual Algorithms with Faster Convergence than O(1/√T) for Problems without Bilinear Structure
Y Yan, Y Xu, Q Lin, L Zhang, T Yang
arXiv preprint arXiv:1904.10112, 2019
Cited by 28 · 2019
Effective model sparsification by scheduled grow-and-prune methods
X Ma, M Qin, F Sun, Z Hou, K Yuan, Y Xu, Y Wang, YK Chen, R Jin, Y Xie
arXiv preprint arXiv:2106.09857, 2021
Cited by 27 · 2021
Learning with non-convex truncated losses by SGD
Y Xu, S Zhu, S Yang, C Zhang, R Jin, T Yang
Uncertainty in Artificial Intelligence, 701-711, 2020
Cited by 27 · 2020
Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than O(1/ε)
Y Xu*, Y Yan*, Q Lin, T Yang
Advances in Neural Information Processing Systems 29, 1208-1216, 2016
Cited by 27 · 2016
A novel convergence analysis for algorithms of the Adam family
Z Guo, Y Xu, W Yin, R Jin, T Yang
arXiv preprint arXiv:2112.03459, 2021
Cited by 26 · 2021