Sebastian Urban Stich
CISPA Helmholtz Center
Verified email at cispa.de - Homepage
Title · Cited by · Year
Advances and open problems in federated learning
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ...
Foundations and Trends® in Machine Learning 14 (1–2), 1-210, 2021
Cited by 5334 · 2021
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
SP Karimireddy, S Kale, M Mohri, SJ Reddi, SU Stich, AT Suresh
ICML 2020 - International Conference on Machine Learning, 2019
Cited by 2435* · 2019
Local SGD Converges Fast and Communicates Little
SU Stich
ICLR 2019 - International Conference on Learning Representations, 2019
Cited by 1043 · 2019
Ensemble Distillation for Robust Model Fusion in Federated Learning
T Lin, L Kong, SU Stich, M Jaggi
NeurIPS 2020 - Advances in Neural Information Processing Systems 33, 2020
Cited by 805 · 2020
Sparsified SGD with memory
SU Stich, JB Cordonnier, M Jaggi
NeurIPS 2018 - Advances in Neural Information Processing Systems, 4448-4459, 2018
Cited by 775 · 2018
Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
A Koloskova, SU Stich, M Jaggi
ICML 2019 - International Conference on Machine Learning, 2019
Cited by 505 · 2019
Error Feedback Fixes SignSGD and other Gradient Compression Schemes
SP Karimireddy, Q Rebjock, SU Stich, M Jaggi
ICML 2019 - International Conference on Machine Learning, 2019
Cited by 494 · 2019
Don't Use Large Mini-Batches, Use Local SGD
T Lin, SU Stich, KK Patel, M Jaggi
ICLR 2020 - International Conference on Learning Representations, 2020
Cited by 459 · 2020
A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
A Koloskova, N Loizou, S Boreiri, M Jaggi, SU Stich
ICML 2020 - International Conference on Machine Learning, 2020
Cited by 447 · 2020
A Field Guide to Federated Optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 319 · 2021
The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Updates
SU Stich, SP Karimireddy
Journal of Machine Learning Research 21, 1-36, 2020
Cited by 265* · 2020
Is Local SGD Better than Minibatch SGD?
B Woodworth, KK Patel, SU Stich, Z Dai, B Bullins, HB McMahan, ...
ICML 2020 - International Conference on Machine Learning, 2020
Cited by 258 · 2020
Breaking the centralized barrier for cross-device federated learning
SP Karimireddy, M Jaggi, S Kale, M Mohri, S Reddi, SU Stich, AT Suresh
NeurIPS 2021 - Advances in Neural Information Processing Systems 34, 2021
Cited by 240* · 2021
Decentralized Deep Learning with Arbitrary Communication Compression
A Koloskova, T Lin, SU Stich, M Jaggi
ICLR 2020 - International Conference on Learning Representations, 2020
Cited by 224 · 2020
Dynamic Model Pruning with Feedback
T Lin, SU Stich, L Barba, D Dmitriev, M Jaggi
ICLR 2020 - International Conference on Learning Representations, 2020
Cited by 205 · 2020
Stochastic distributed learning with gradient quantization and double-variance reduction
S Horváth, D Kovalev, K Mishchenko, P Richtárik, S Stich
Optimization Methods and Software 38 (1), 91-106, 2023
Cited by 172 · 2023
Efficiency of the Accelerated Coordinate Descent Method on Structured Optimization Problems
Y Nesterov, SU Stich
SIAM Journal on Optimization 27 (1), 110-123, 2017
Cited by 159 · 2017
On the Convergence of SGD with Biased Gradients
A Ajalloeian, SU Stich
ICML 2020 Workshop - Beyond First Order Methods in ML Systems, arXiv …, 2020
Cited by 116* · 2020
Unified Optimal Analysis of the (Stochastic) Gradient Method
SU Stich
arXiv preprint arXiv:1907.04232, 2019
Cited by 112 · 2019
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
K Mishchenko, G Malinovsky, S Stich, P Richtárik
ICML 2022 - International Conference on Machine Learning, 2022
Cited by 107 · 2022
Articles 1–20