Chawin Sitawarin
Postdoctoral Researcher @ Meta
Verified email at meta.com
Title · Cited by · Year
Enhancing robustness of machine learning systems via data transformations
AN Bhagoji, D Cullina, C Sitawarin, P Mittal
2018 52nd Annual Conference on Information Sciences and Systems (CISS), 1-5, 2018
Cited by 424* · 2018
DARTS: Deceiving autonomous cars with toxic signs
C Sitawarin, AN Bhagoji, A Mosenia, M Chiang, P Mittal
arXiv preprint arXiv:1802.06430, 2018
Cited by 361* · 2018
Analyzing the robustness of open-world machine learning
V Sehwag, AN Bhagoji, L Song, C Sitawarin, D Cullina, M Chiang, P Mittal
Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security …, 2019
Cited by 90 · 2019
Beyond Grand Theft Auto V for training, testing and enhancing deep learning in self-driving cars
M Martinez, C Sitawarin, K Finch, L Meincke, A Yablonski, A Kornhauser
arXiv preprint arXiv:1712.01397, 2017
Cited by 76 · 2017
SAT: Improving adversarial training via curriculum-based loss smoothing
C Sitawarin, S Chakraborty, D Wagner
Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security …, 2021
Cited by 69* · 2021
Inverse-designed photonic fibers and metasurfaces for nonlinear frequency conversion
C Sitawarin, W Jin, Z Lin, AW Rodriguez
Photonics Research 6 (5), B82-B89, 2018
Cited by 64* · 2018
On the robustness of deep k-nearest neighbors
C Sitawarin, D Wagner
2019 IEEE Security and Privacy Workshops (SPW), 1-7, 2019
Cited by 62* · 2019
Defending against adversarial examples with k-nearest neighbor
C Sitawarin, D Wagner
arXiv preprint arXiv:1906.09525, 2019
Cited by 30 · 2019
Jatmo: Prompt injection defense by task-specific finetuning
J Piet, M Alrashed, C Sitawarin, S Chen, Z Wei, E Sun, B Alomair, ...
Computer Security – ESORICS 2024, 2024
Cited by 20 · 2024
Minimum-norm adversarial examples on KNN and KNN based models
C Sitawarin, D Wagner
2020 IEEE Security and Privacy Workshops (SPW), 34-40, 2020
Cited by 20 · 2020
Better the devil you know: An analysis of evasion attacks using out-of-distribution adversarial examples
V Sehwag, AN Bhagoji, L Song, C Sitawarin, D Cullina, M Chiang, P Mittal
arXiv preprint arXiv:1905.01726, 2019
Cited by 20 · 2019
Demystifying the adversarial robustness of random transformation defenses
C Sitawarin, ZJ Golan-Strieb, D Wagner
International Conference on Machine Learning, 20232-20252, 2022
Cited by 18 · 2022
StruQ: Defending against prompt injection with structured queries
S Chen, J Piet, C Sitawarin, D Wagner
arXiv preprint arXiv:2402.06363, 2024
Cited by 15 · 2024
PAL: Proxy-guided black-box attack on large language models
C Sitawarin, N Mu, D Wagner, A Araujo
arXiv preprint arXiv:2402.09674, 2024
Cited by 13 · 2024
Mark my words: Analyzing and evaluating language model watermarks
J Piet, C Sitawarin, V Fang, N Mu, D Wagner
arXiv preprint arXiv:2312.00273, 2023
Cited by 10 · 2023
Part-Based Models Improve Adversarial Robustness
C Sitawarin, K Pongmala, Y Chen, N Carlini, D Wagner
The Eleventh International Conference on Learning Representations, 2023
Cited by 10 · 2023
REAP: A Large-Scale Realistic Adversarial Patch Benchmark
N Hingun, C Sitawarin, J Li, D Wagner
Proceedings of the IEEE/CVF international conference on computer vision (ICCV), 2023
Cited by 8 · 2023
Not all pixels are born equal: An analysis of evasion attacks under locality constraints
V Sehwag, C Sitawarin, AN Bhagoji, A Mosenia, M Chiang, P Mittal
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications …, 2018
Cited by 8 · 2018
Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems
C Sitawarin, F Tramèr, N Carlini
Proceedings of the 40th International Conference on Machine Learning 202 …, 2023
Cited by 7 · 2023
Vulnerability detection with code language models: How far are we?
Y Ding, Y Fu, O Ibrahim, C Sitawarin, X Chen, B Alomair, D Wagner, ...
arXiv preprint arXiv:2403.18624, 2024
Cited by 6 · 2024
Showing articles 1–20