Matthew Jagielski
Manipulating machine learning: Poisoning attacks and countermeasures for regression learning
M Jagielski, A Oprea, B Biggio, C Liu, C Nita-Rotaru, B Li
2018 IEEE Symposium on Security and Privacy (SP), 19-35, 2018
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
A Demontis, M Melis, M Pintor, M Jagielski, B Biggio, A Oprea, ...
28th USENIX Security Symposium (USENIX Security 19), 321-338, 2019
High Accuracy and High Fidelity Extraction of Neural Networks
M Jagielski, N Carlini, D Berthelot, A Kurakin, N Papernot
29th USENIX Security Symposium (USENIX Security 20), 2020
Differentially private fair learning
M Jagielski, M Kearns, J Mao, A Oprea, A Roth, S Sharifi-Malvajerdi, ...
International Conference on Machine Learning, 3000-3008, 2019
Threat Detection for Collaborative Adaptive Cruise Control in Connected Cars
M Jagielski, N Jones, CW Lin, C Nita-Rotaru, S Shiraishi
Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and …, 2018
Secure Communication Channel Establishment: TLS 1.3 (over TCP Fast Open) vs. QUIC
S Chen, S Jero, M Jagielski, A Boldyreva, C Nita-Rotaru
European Symposium on Research in Computer Security, 404-426, 2019
Network and system level security in connected vehicle applications
H Liang, M Jagielski, B Zheng, CW Lin, E Kang, S Shiraishi, C Nita-Rotaru, ...
2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 1-7, 2018
Cryptanalytic Extraction of Neural Network Models
N Carlini, M Jagielski, I Mironov
arXiv preprint arXiv:2003.04884, 2020
Subpopulation Data Poisoning Attacks
M Jagielski, G Severi, NP Harger, A Oprea
arXiv preprint arXiv:2006.14026, 2020
Auditing Differentially Private Machine Learning: How Private is Private SGD?
M Jagielski, J Ullman, A Oprea
arXiv preprint arXiv:2006.07709, 2020