Yuchen Li
Title | Cited by | Year
How Do Transformers Learn Topic Structure: Towards a Mechanistic Understanding
Y Li, Y Li, A Risteski
International Conference on Machine Learning (ICML), 2023
52 | 2023
Context-sensitive malicious spelling error correction
H Gong, Y Li, S Bhat, P Viswanath
The World Wide Web Conference (WWW), 2771-2777, 2019
30 | 2019
Temporal motifs in heterogeneous information networks
Y Li*, Z Lou*, Y Shi, J Han
MLG Workshop @ KDD, 2018
30 | 2018
Contrasting the landscape of contrastive and non-contrastive learning
A Pokle*, J Tian*, Y Li*, A Risteski
Conference on Artificial Intelligence and Statistics (AISTATS), 2022
25 | 2022
Discovering Hypernymy in Text-Rich Heterogeneous Information Network by Exploiting Context Granularity
Y Shi*, J Shen*, Y Li, N Zhang, X He, Z Lou, Q Zhu, M Walker, M Kim, ...
International Conference on Information and Knowledge Management (CIKM), 599-608, 2019
19 | 2019
Transformers are uninterpretable with myopic methods: a case study with bounded Dyck grammars
K Wen, Y Li, B Liu, A Risteski
Neural Information Processing Systems (NeurIPS), 2023
13* | 2023
The Limitations of Limited Context for Constituency Parsing
Y Li, A Risteski
Association for Computational Linguistics (ACL) 1, 2675-2687, 2021
4 | 2021
Complexity of Leading Digit Sequences
X He*, AJ Hildebrand*, Y Li*, Y Zhang*
Discrete Mathematics & Theoretical Computer Science 22, 2020
3 | 2020