Language models are few-shot learners TB Brown arXiv preprint arXiv:2005.14165, 2020 | 38724 | 2020 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ..., R Child, A Ramesh, DM Ziegler, J Wu, C Winter, C Hesse, M Chen, E Sigler, M Litwin, S Gray, B Chess, J Clark, C Berner, S McCandlish, A Radford …, 2020 | 9301 | 2020 |
Gpt-4 technical report J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ... arXiv preprint arXiv:2303.08774, 2023 | 7242 | 2023 |
GPT-4 technical report OpenAI arXiv preprint arXiv:2303.08774, 2023 | 1512 | 2023 |
Adding gradient noise improves learning for very deep networks A Neelakantan, L Vilnis, QV Le, I Sutskever, L Kaiser, K Kurach, J Martens International Conference on Learning Representations Workshop (ICLR Workshop …, 2015 | 642 | 2015 |
Efficient non-parametric estimation of multiple embeddings per word in vector space A Neelakantan, J Shankar, A Passos, A McCallum Conference on Empirical Methods in Natural Language Processing, 2014, 2015 | 624 | 2015 |
Text and code embeddings by contrastive pre-training A Neelakantan, T Xu, R Puri, A Radford, JM Han, J Tworek, Q Yuan, ... arXiv preprint arXiv:2201.10005, 2022 | 427 | 2022 |
Compositional vector space models for knowledge base completion A Neelakantan, B Roth, A McCallum arXiv preprint arXiv:1504.06662, 2015 | 359 | 2015 |
Chains of reasoning over entities, relations, and text using recurrent neural networks R Das, A Neelakantan, D Belanger, A McCallum European Chapter of the Association for Computational Linguistics (EACL), 2017., 2016 | 340 | 2016 |
Neural programmer: Inducing latent programs with gradient descent A Neelakantan, QV Le, I Sutskever International Conference on Learning Representations (ICLR), 2016, 2015 | 296 | 2015 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint arXiv:2005.14165, doi: 10.48550/arXiv.2005.14165, 2020 | 252 | 2020 |
Taskmaster-1: Toward a realistic and diverse dialog dataset B Byrne, K Krishnamoorthi, C Sankar, A Neelakantan, D Duckworth, ... arXiv preprint arXiv:1909.05358, 2019 | 234 | 2019 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ..., D Amodei, 2020 | 185 | 2020 |
Language models are few-shot learners B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, ... arXiv preprint arXiv:2005.14165, 2020 | 179 | 2020 |
Learning a natural language interface with neural programmer A Neelakantan, QV Le, M Abadi, A McCallum, D Amodei International Conference on Learning Representations (ICLR), 2017., 2016 | 141 | 2016 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint arXiv:2005.14165, 2020 | 127 | 2020 |
Theory and experiments on vector quantized autoencoders A Roy, A Vaswani, A Neelakantan, N Parmar arXiv preprint arXiv:1805.11063, 2018 | 100 | 2018 |
Inferring Missing Entity Type Instances for Knowledge Base Completion: New Dataset and Methods A Neelakantan, MW Chang The North American Chapter of the Association for Computational Linguistics …, 2015 | 97 | 2015 |
Trading off diversity and quality in natural language generation H Zhang, D Duckworth, D Ippolito, A Neelakantan arXiv preprint arXiv:2004.10450, 2020 | 91 | 2020 |
Predicting the impact of scientific concepts using full‐text features K McKeown, H Daume III, S Chaturvedi, J Paparrizos, K Thadani, P Barrio, ... Journal of the Association for Information Science and Technology 67 (11 …, 2016 | 85 | 2016 |