The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-tuning. S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo. arXiv preprint arXiv:2305.14045, 2023.
Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization. S Kim, SJ Joo, H Chae, C Kim, S Hwang, J Yeo. arXiv preprint arXiv:2209.00930, 2022.
CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification. S Kim, SJ Joo, Y Jang, H Chae, J Yeo. arXiv preprint arXiv:2303.03628, 2023.
How Well Do Large Language Models Truly Ground? H Lee, S Joo, C Kim, J Jang, D Kim, KW On, M Seo. arXiv preprint arXiv:2311.09069, 2023.
Semiparametric Token-Sequence Co-Supervision. H Lee, D Kim, J Jun, S Joo, J Jang, KW On, M Seo. arXiv preprint arXiv:2403.09024, 2024.
Development and Implementation of a Conditional Gated Multilayer Perceptron Model for Natural Language Processing. 손규진, 김승원, 주세준, 조우진, 나정은. Proceedings of the Korea Information Processing Society Conference 28 (2), 1116-1119, 2021.