Gal Vardi
Reconstructing training data from trained neural networks
N Haim, G Vardi, G Yehudai, O Shamir, M Irani
Advances in Neural Information Processing Systems 35, 22911-22924, 2022
Cited by 154 · 2022
On the implicit bias in deep-learning algorithms
G Vardi
Communications of the ACM 66 (6), 86-93, 2023
Cited by 93 · 2023
Implicit regularization in ReLU networks with the square loss
G Vardi, O Shamir
Conference on Learning Theory, 4224-4258, 2021
Cited by 64 · 2021
Implicit regularization towards rank minimization in ReLU networks
N Timor, G Vardi, O Shamir
International Conference on Algorithmic Learning Theory, 1429-1459, 2023
Cited by 58 · 2023
Implicit bias in leaky ReLU networks trained on high-dimensional data
S Frei, G Vardi, PL Bartlett, N Srebro, W Hu
arXiv preprint arXiv:2210.07082, 2022
Cited by 53 · 2022
Benign overfitting in linear classifiers and leaky ReLU networks from KKT conditions for margin maximization
S Frei, G Vardi, P Bartlett, N Srebro
The Thirty Sixth Annual Conference on Learning Theory, 3173-3228, 2023
Cited by 38 · 2023
From local pseudorandom generators to hardness of learning
A Daniely, G Vardi
Conference on Learning Theory, 1358-1394, 2021
Cited by 36 · 2021
On the effective number of linear regions in shallow univariate ReLU networks: Convergence guarantees and implicit bias
I Safran, G Vardi, JD Lee
Advances in Neural Information Processing Systems 35, 32667-32679, 2022
Cited by 34 · 2022
On margin maximization in linear and ReLU networks
G Vardi, O Shamir, N Srebro
Advances in Neural Information Processing Systems 35, 37024-37036, 2022
Cited by 31 · 2022
On the optimal memorization power of ReLU neural networks
G Vardi, G Yehudai, O Shamir
arXiv preprint arXiv:2110.03187, 2021
Cited by 30 · 2021
Gradient methods provably converge to non-robust networks
G Vardi, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 35, 20921-20932, 2022
Cited by 28 · 2022
Benign overfitting and grokking in ReLU networks for XOR cluster data
Z Xu, Y Wang, S Frei, G Vardi, W Hu
arXiv preprint arXiv:2310.02541, 2023
Cited by 27 · 2023
Size and depth separation in approximating benign functions with neural networks
G Vardi, D Reichman, T Pitassi, O Shamir
Conference on Learning Theory, 4195-4223, 2021
Cited by 26* · 2021
Neural networks with small weights and depth-separation barriers
G Vardi, O Shamir
Advances in Neural Information Processing Systems 33, 19433-19442, 2020
Cited by 26 · 2020
Learning a single neuron with bias using gradient descent
G Vardi, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 34, 28690-28700, 2021
Cited by 24 · 2021
Hardness of learning neural networks with natural weights
A Daniely, G Vardi
Advances in Neural Information Processing Systems 33, 930-940, 2020
Cited by 24 · 2020
The double-edged sword of implicit bias: Generalization vs. robustness in ReLU networks
S Frei, G Vardi, P Bartlett, N Srebro
Advances in Neural Information Processing Systems 36, 2024
Cited by 22 · 2024
On convexity and linear mode connectivity in neural networks
D Yunis, KK Patel, PHP Savarese, G Vardi, J Frankle, M Walter, K Livescu, ...
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022
Cited by 19 · 2022
Deconstructing data reconstruction: Multiclass, weight decay and general losses
G Buzaglo, N Haim, G Yehudai, G Vardi, Y Oz, Y Nikankin, M Irani
Advances in Neural Information Processing Systems 36, 2024
Cited by 18 · 2024
Width is less important than depth in ReLU neural networks
G Vardi, G Yehudai, O Shamir
Conference on Learning Theory, 1249-1281, 2022
Cited by 17 · 2022