Mark Niklas Müller
PhD Student, ETH Zurich
Verified email at inf.ethz.ch - Homepage
Title · Cited by · Year
PRIMA: general and precise neural network certification via scalable convex hull approximations
MN Müller, G Makarchuk, G Singh, M Püschel, M Vechev
Proceedings of the ACM on Programming Languages 6 (POPL), 1-33, 2022
148* · 2022
Complete verification via multi-neuron relaxation guided branch-and-bound
C Ferrari, MN Muller, N Jovanovic, M Vechev
The Tenth International Conference on Learning Representations, 2022 (ICLR'22), 2022
127 · 2022
First three years of the international verification of neural networks competition (VNN-COMP)
C Brix, MN Müller, S Bak, TT Johnson, C Liu
International Journal on Software Tools for Technology Transfer 25 (3), 329-339, 2023
95 · 2023
The third international verification of neural networks competition (VNN-COMP 2022): summary and results
MN Müller, C Brix, S Bak, C Liu, TT Johnson
arXiv preprint arXiv:2212.10376, 2022
75 · 2022
Certified training: Small boxes are all you need
MN Müller, F Eckert, M Fischer, M Vechev
The Eleventh International Conference on Learning Representations (ICLR'23), 2022
62 · 2022
Boosting randomized smoothing with variance reduced classifiers
MZ Horváth, MN Müller, M Fischer, M Vechev
The Tenth International Conference on Learning Representations, 2022 (ICLR'22), 2021
52 · 2021
Taps: Connecting certified and adversarial training
Y Mao, MN Müller, M Fischer, M Vechev
Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS'23), 2023
26* · 2023
SWT-bench: Testing and validating real-world bug-fixes with code agents
N Mündler, M Müller, J He, M Vechev
Advances in Neural Information Processing Systems 37, 81857-81887, 2024
18* · 2024
Robust and Accurate -- Compositional Architectures for Randomized Smoothing
MZ Horváth, MN Müller, M Fischer, M Vechev
arXiv preprint arXiv:2204.00487, 2022
17 · 2022
Certify or predict: Boosting certified robustness with compositional architectures
MN Müller, M Balunović, M Vechev
The Ninth International Conference on Learning Representations, 2021 (ICLR'21), 2021
15 · 2021
Evading data contamination detection for language models is (too) easy
J Dekoninck, MN Müller, M Baader, M Fischer, M Vechev
arXiv preprint arXiv:2402.02823, 2024
14 · 2024
Abstract interpretation of fixpoint iterators with applications to neural networks
MN Müller, M Fischer, R Staab, M Vechev
Proceedings of the ACM on Programming Languages 7 (PLDI), 786-810, 2023
14* · 2023
Understanding certified training with interval bound propagation
Y Mao, MN Müller, M Fischer, M Vechev
arXiv preprint arXiv:2306.10426, 2023
13 · 2023
Certified robustness to data poisoning in gradient-based training
P Sosnin, MN Müller, M Baader, C Tsay, M Wicker
arXiv preprint arXiv:2406.05670, 2024
9 · 2024
Mitigating catastrophic forgetting in language transfer via model merging
A Alexandrov, V Raychev, MN Müller, C Zhang, M Vechev, K Toutanova
arXiv preprint arXiv:2407.08699, 2024
8 · 2024
Spear: Exact gradient inversion of batches in federated learning
DI Dimitrov, M Baader, M Müller, M Vechev
Advances in Neural Information Processing Systems 37, 106768-106799, 2024
6 · 2024
Expressivity of ReLU-Networks under Convex Relaxations
M Baader, MN Müller, Y Mao, M Vechev
arXiv preprint arXiv:2311.04015, 2023
6 · 2023
The third international verification of neural networks competition (VNN-COMP 2022): summary and results (2022)
MN Müller, C Brix, S Bak, C Liu, TT Johnson
URL https://arxiv.org/abs/2212.10376, 2022
6 · 2022
Constat: Performance-based contamination detection in large language models
J Dekoninck, M Müller, M Vechev
Advances in Neural Information Processing Systems 37, 92420-92464, 2024
5 · 2024
Efficient Certified Training and Robustness Verification of Neural ODEs
M Zeqiri, MN Müller, M Fischer, M Vechev
The Eleventh International Conference on Learning Representations (ICLR'23), 2023
5* · 2023