Yangyang Shi
Meta
Verified email at fb.com
Title
Cited by
Year
Recurrent neural networks for language understanding
K Yao, G Zweig, MY Hwang, Y Shi, D Yu
Fourteenth Annual Conference of the International Speech Communication …, 2013
403 · 2013
Spoken language understanding using long short-term memory neural networks
K Yao, B Peng, Y Zhang, D Yu, G Zweig, Y Shi
2014 IEEE Spoken Language Technology Workshop (SLT), 189-194, 2014
397 · 2014
TorchAudio: Building blocks for audio and speech processing
YY Yang, M Hira, Z Ni, A Astafurov, C Chen, C Puhrsch, D Pollack, ...
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
154 · 2022
Emformer: Efficient memory transformer based acoustic model for low latency streaming speech recognition
Y Shi, Y Wang, C Wu, CF Yeh, J Chan, F Zhang, D Le, M Seltzer
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
154 · 2021
Contextual Spoken Language Understanding Using Recurrent Neural Networks
Y Shi, K Yao, H Chen, YC Pan, MY Hwang, B Peng
IEEE International Conference on Acoustics, Speech and Signal Processing, 2015
88 · 2015
LLM-QAT: Data-free quantization aware training for large language models
Z Liu, B Oguz, C Zhao, E Chang, P Stock, Y Mehdad, Y Shi, ...
arXiv preprint arXiv:2305.17888, 2023
76 · 2023
Deep LSTM based feature mapping for query classification
Y Shi, K Yao, L Tian, D Jiang
Proceedings of the 2016 Conference of the North American Chapter of the …, 2016
68 · 2016
Contextualized streaming end-to-end speech recognition with trie-based deep biasing and shallow fusion
D Le, M Jain, G Keren, S Kim, Y Shi, J Mahadeokar, J Chan, ...
arXiv preprint arXiv:2104.02194, 2021
67 · 2021
Streaming transformer-based acoustic models using self-attention with augmented memory
C Wu, Y Wang, Y Shi, CF Yeh, F Zhang
arXiv preprint arXiv:2005.08042, 2020
67 · 2020
Recurrent neural network language model adaptation with curriculum learning
Y Shi, M Larson, CM Jonker
Computer Speech & Language 33 (1), 136-154, 2015
49 · 2015
Towards recurrent neural networks language models with linguistic and contextual features
Y Shi, P Wiggers, CM Jonker
Thirteenth Annual Conference of the International Speech Communication …, 2012
49 · 2012
Weak-attention suppression for transformer based speech recognition
Y Shi, Y Wang, C Wu, C Fuegen, F Zhang, D Le, CF Yeh, ML Seltzer
arXiv preprint arXiv:2005.09137, 2020
27 · 2020
Knowledge distillation for recurrent neural network language modeling with trust regularization
Y Shi, MY Hwang, X Lei, H Sheng
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
26 · 2019
Dissecting user-perceived latency of on-device E2E speech recognition
Y Shangguan, R Prabhavalkar, H Su, J Mahadeokar, Y Shi, J Zhou, C Wu, ...
arXiv preprint arXiv:2104.02207, 2021
24 · 2021
Mining effective negative training samples for keyword spotting
J Hou, Y Shi, M Ostendorf, MY Hwang, L Xie
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
24 · 2020
Higher order iteration schemes for unconstrained optimization
Y Shi, P Pan
American Journal of Operations Research 1 (03), 73, 2011
24 · 2011
Region proposal network based small-footprint keyword spotting
J Hou, Y Shi, M Ostendorf, MY Hwang, L Xie
IEEE Signal Processing Letters 26 (10), 1471-1475, 2019
22 · 2019
Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications
Y Wang, Y Shi, F Zhang, C Wu, J Chan, CF Yeh, A Xiao
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
17 · 2021
Evaluations of interventions using mathematical models with exponential and non-exponential distributions for disease stages: the case of Ebola
X Wang, Y Shi, Z Feng, J Cui
Bulletin of Mathematical Biology 79, 2149-2173, 2017
15 · 2017
Recurrent support vector machines for slot tagging in spoken language understanding
Y Shi, K Yao, H Chen, D Yu, YC Pan, MY Hwang
Proceedings of the 2016 Conference of the North American Chapter of the …, 2016
15 · 2016
Articles 1–20