Anastasia Koloskova
PhD student, EPFL
Verified email at epfl.ch
Title · Cited by · Year
Decentralized stochastic optimization and gossip algorithms with compressed communication
A Koloskova, SU Stich, M Jaggi
ICML 2019 - Proceedings of the 36th International Conference on Machine Learning, 2019
Cited by 385 · 2019
A unified theory of decentralized sgd with changing topology and local updates
A Koloskova, N Loizou, S Boreiri, M Jaggi, SU Stich
ICML 2020, 2020
Cited by 290 · 2020
Decentralized deep learning with arbitrary communication compression
A Koloskova, T Lin, SU Stich, M Jaggi
ICLR 2020, 2019
Cited by 171 · 2019
A linearly convergent algorithm for decentralized optimization: Sending less bits for free!
D Kovalev, A Koloskova, M Jaggi, P Richtarik, S Stich
International Conference on Artificial Intelligence and Statistics, 4087-4095, 2021
Cited by 55 · 2021
Consensus control for decentralized deep learning
L Kong, T Lin, A Koloskova, M Jaggi, SU Stich
ICML 2021, 2021
Cited by 46 · 2021
An improved analysis of gradient tracking for decentralized machine learning
A Koloskova, T Lin, SU Stich
Advances in Neural Information Processing Systems 34, 11422-11435, 2021
Cited by 42 · 2021
Relaysum for decentralized deep learning on heterogeneous data
T Vogels, L He, A Koloskova, SP Karimireddy, T Lin, SU Stich, M Jaggi
Advances in Neural Information Processing Systems 34, 28004-28015, 2021
Cited by 32 · 2021
Decentralized local stochastic extra-gradient for variational inequalities
A Beznosikov, P Dvurechenskii, A Koloskova, V Samokhin, SU Stich, ...
Advances in Neural Information Processing Systems 35, 38116-38133, 2022
Cited by 22 · 2022
Efficient greedy coordinate descent for composite problems
SP Karimireddy, A Koloskova, SU Stich, M Jaggi
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Cited by 20 · 2019
Sharper convergence guarantees for asynchronous sgd for distributed and federated learning
A Koloskova, SU Stich, M Jaggi
NeurIPS 2022, 2022
Cited by 12 · 2022
Data-heterogeneity-aware mixing for decentralized learning
Y Dandi, A Koloskova, M Jaggi, SU Stich
arXiv preprint arXiv:2204.06477, 2022
Cited by 11 · 2022
Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees
A Koloskova, H Hendrikx, SU Stich
arXiv preprint arXiv:2305.01588, 2023
Cited by 2 · 2023
Convergence of Gradient Descent with Linearly Correlated Noise and Applications to Differentially Private Learning
A Koloskova, R McKenna, Z Charles, K Rush, B McMahan
arXiv preprint arXiv:2302.01463, 2023
Cited by 1 · 2023
Decentralized Gradient Tracking with Local Steps
Y Liu, T Lin, A Koloskova, SU Stich
arXiv preprint arXiv:2301.01313, 2023
Cited by 1 · 2023
Shuffle SGD is Always Better than SGD: Improved Analysis of SGD with Arbitrary Data Orders
A Koloskova, N Doikov, SU Stich, M Jaggi
arXiv preprint arXiv:2305.19259, 2023
2023
Decentralized Stochastic Optimization with Client Sampling
Z Liu, A Koloskova, M Jaggi, T Lin
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022
Articles 1–16