Ernie Chang
Research Scientist, Meta
Verified email at fb.com
Title
Cited by
Year
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
693 · 2022
LLM-QAT: Data-free quantization aware training for large language models
Z Liu, B Oguz, C Zhao, E Chang, P Stock, Y Mehdad, Y Shi, ...
arXiv preprint arXiv:2305.17888, 2023
64 · 2023
Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence
X Shen, E Chang, H Su, J Zhou, D Klakow
Proceedings of ACL 2020, 2020
57 · 2020
Neural Data-to-text Generation with LM-based Text Augmentation
E Chang, X Shen, D Zhu, V Demberg, H Su
Proceedings of EACL 2021, 2021
42 · 2021
On Training Instance Selection for Few-Shot Neural Text Generation
E Chang, X Shen, HS Yeh, V Demberg
Proceedings of ACL 2021, 2021
32 · 2021
MovieChats: Chat like Humans in a Closed Domain
H Su, X Shen, Z Xiao, Z Zhang, E Chang, C Zhang, C Niu, J Zhou
Proceedings of EMNLP 2020, 6605-6619, 2020
28 · 2020
Generating e-commerce product titles and predicting their quality
JGC de Souza, M Kozielski, P Mathur, E Chang, M Guerini, M Negri, ...
Proceedings of INLG, 233-243, 2018
27 · 2018
A few thousand translations go a long way! Leveraging pre-trained models for African news translation
DI Adelani, JO Alabi, A Fan, J Kreutzer, X Shen, M Reid, D Ruiter, ...
arXiv preprint arXiv:2205.02022, 2022
26 · 2022
Does the Order of Training Samples Matter? Improving Neural Data-to-Text Generation with Curriculum Learning
E Chang, HS Yeh, V Demberg
Proceedings of EACL 2021, 2021
25 · 2021
Jointly Improving Language Understanding and Generation with Quality-Weighted Weak Supervision of Automatic Labeling
E Chang, V Demberg, A Marin
Proceedings of EACL 2021, 2021
23 · 2021
Unsupervised Pidgin Text Generation By Pivoting English Data and Self-Training
E Chang, D Adelani, X Shen, V Demberg
Proceedings of Workshop at ICLR, 2020
21 · 2020
Neobility at SemEval-2017 Task 1: An attention-based sentence similarity model
WL Zhuang, E Chang
Proceedings of SemEval-2017 at ACL 2017, 2017
19 · 2017
DART: A Lightweight Quality-Suggestive Data-to-Text Annotation Tool
E Chang, J Caplinger, A Marin, X Shen, V Demberg
Proceedings of COLING 2020 (Best Demo Paper Award), 12-17, 2020
18 · 2020
Improving language generation from feature-rich tree-structured data with relational graph convolutional encoders
X Hong, E Chang, V Demberg
Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR …, 2019
13 · 2019
MDIA: A benchmark for multilingual dialogue generation in 46 languages
Q Zhang, X Shen, E Chang, J Ge, P Chen
arXiv preprint arXiv:2208.13078, 2022
10 · 2022
A few thousand translations go a long way! Leveraging pre-trained models for African news translation
D Adelani, J Alabi, A Fan, J Kreutzer, X Shen, M Reid, D Ruiter, D Klakow, ...
10 · 2022
The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation
E Chang, X Shen, A Marin, V Demberg
Proceedings of the 14th INLG, https://aclanthology.org/2021.inlg-1.36/, 2021
8 · 2021
Time-Aware Ancient Chinese Text Translation and Inference
E Chang, YT Shiue, HS Yeh, V Demberg
LChange @ ACL 2021, 2021
8 · 2021
Articles 1–20