Exploring the limits of transfer learning with a unified text-to-text transformer C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ... Journal of Machine Learning Research 21 (140), 1-67, 2020 | 19216 | 2020 |
Lamda: Language models for dialog applications R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ... arXiv preprint arXiv:2201.08239, 2022 | 1508 | 2022 |
Deep learning scaling is predictable, empirically J Hestness, S Narang, N Ardalani, G Diamos, H Jun, H Kianinejad, ... arXiv preprint arXiv:1712.00409, 2017 | 768 | 2017 |
Deep Voice 2: Multi-Speaker Neural Text-to-Speech S Arik, G Diamos, A Gibiansky, J Miller, K Peng, Y Zhou, ... Neural Information Processing Systems (NIPS), 2017 | 647* | 2017 |
Glam: Efficient scaling of language models with mixture-of-experts N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ... International Conference on Machine Learning, 5547-5569, 2022 | 527 | 2022 |
Neural voice cloning with a few samples S Arik, J Chen, K Peng, W Ping, Y Zhou Advances in neural information processing systems 31, 2018 | 458 | 2018 |
OpenPiton: An open source manycore research framework J Balkind, M McKeown, Y Fu, T Nguyen, Y Zhou, A Lavrov, M Shahrad, ... ACM SIGPLAN Notices 51 (4), 217-232, 2016 | 280 | 2016 |
Mixture-of-experts with expert choice routing Y Zhou, T Lei, H Liu, N Du, Y Huang, V Zhao, AM Dai, QV Le, J Laudon Advances in Neural Information Processing Systems 35, 7103-7114, 2022 | 229 | 2022 |
Glam: Efficient scaling of language models with mixture-of-experts N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ... arXiv preprint, 2021 | 156 | 2021 |
Atomic In-place Updates for Non-volatile Main Memories with Kamino-Tx A Memaripour, A Badam, A Phanishayee, Y Zhou, R Alagappan, ... EuroSys '17 Proceedings of the Twelfth European Conference on Computer …, 2017 | 128 | 2017 |
Lamda: Language models for dialog applications R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ... | 102 | 2022 |
Do transformer modifications transfer across implementations and applications? S Narang, HW Chung, Y Tay, W Fedus, T Fevry, M Matena, K Malkan, ... arXiv preprint arXiv:2102.11972, 2021 | 98 | 2021 |
Exploring the limits of transfer learning with a unified text-to-text transformer C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ... arXiv preprint arXiv:1910.10683, 2019 | 91 | 2019 |
A learned performance model for tensor processing units S Kaufman, P Phothilimthana, Y Zhou, C Mendis, S Roy, A Sabne, ... Proceedings of Machine Learning and Systems 3, 387-400, 2021 | 81 | 2021 |
Resource-efficient neural architect Y Zhou, S Ebrahimi, SÖ Arık, H Yu, H Liu, G Diamos arXiv preprint arXiv:1806.07912, 2018 | 78 | 2018 |
MITTS: Memory inter-arrival time traffic shaping Y Zhou, D Wentzlaff ACM SIGARCH Computer Architecture News 44 (3), 532-544, 2016 | 63 | 2016 |
T5: Exploring the limits of transfer learning with a unified text-to-text transformer C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ... Journal of Machine Learning Research 21, 1-67, 2020 | 61 | 2020 |
Deep learning scaling is predictable, empirically J Hestness, S Narang, N Ardalani, G Diamos, H Jun, H Kianinejad, ... arXiv preprint arXiv:1712.00409, 2017 | 58 | 2017 |
Exploring the limits of transfer learning with a unified text-to-text transformer A Roberts, C Raffel, K Lee, M Matena, N Shazeer, PJ Liu, S Narang, W Li, ... Google, Tech. Rep., 2019 | 57 | 2019 |
Power and Energy Characterization of an Open Source 25-Core Manycore Processor M McKeown, A Lavrov, M Shahrad, PJ Jackson, Y Fu, J Balkind, ... HPCA, 762-775, 2018 | 57 | 2018 |