Guolin Ke
DP Technology
Verified email at dp.tech - Homepage
Title
Cited by
Year
Lightgbm: A highly efficient gradient boosting decision tree
G Ke, Q Meng, T Finley, T Wang, W Chen, W Ma, Q Ye, TY Liu
Advances in neural information processing systems 30, 2017
16775 · 2017
Do transformers really perform badly for graph representation?
C Ying, T Cai, S Luo, S Zheng, G Ke, D He, Y Shen, TY Liu
Advances in neural information processing systems 34, 28877-28888, 2021
1541 · 2021
Deep subdomain adaptation network for image classification
Y Zhu, F Zhuang, J Wang, G Ke, J Chen, J Bian, H Xiong, Q He
IEEE transactions on neural networks and learning systems 32 (4), 1713-1722, 2020
976 · 2020
Uni-Mol: A Universal 3D Molecular Representation Learning Framework
G Zhou, Z Gao, Q Ding, H Zheng, H Xu, Z Wei, L Zhang, G Ke
International Conference on Learning Representations, 2023
353 · 2023
Rethinking Positional Encoding in Language Pre-training
G Ke, D He, TY Liu
International Conference on Learning Representations (ICLR) 2021, 2020
348 · 2020
Invertible image rescaling
M Xiao, S Zheng, C Liu, Y Wang, D He, G Ke, J Bian, Z Lin, TY Liu
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
281 · 2020
DeepGBM: A deep learning framework distilled by GBDT for online prediction tasks
G Ke, Z Xu, J Zhang, J Bian, TY Liu
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
191 · 2019
A communication-efficient parallel algorithm for decision tree
Q Meng, G Ke, T Wang, W Chen, Q Ye, ZM Ma, TY Liu
Advances in Neural Information Processing Systems, 1279-1287, 2016
181 · 2016
Less is more: Pre-train a strong text encoder for dense retrieval using a weak decoder
S Lu, D He, C Xiong, G Ke, W Malik, Z Dou, P Bennett, T Liu, A Overwijk
arXiv preprint arXiv:2102.09206, 2021
91 · 2021
Benchmarking graphormer on large-scale molecular modeling datasets
Y Shi, S Zheng, G Ke, Y Shen, J You, J He, S Luo, C Liu, D He, TY Liu
arXiv preprint arXiv:2203.04810, 2022
75 · 2022
How could neural networks understand programs?
D Peng, S Zheng, Y Li, G Ke, D He, TY Liu
International Conference on Machine Learning, 8476-8486, 2021
69 · 2021
Stable, fast and accurate: Kernelized attention with relative positional encoding
S Luo, S Li, T Cai, D He, D Peng, S Zheng, G Ke, L Wang, TY Liu
Advances in Neural Information Processing Systems 34, 22795-22807, 2021
53 · 2021
Uni-Fold: an open-source platform for developing protein folding models beyond AlphaFold
Z Li, X Liu, W Chen, F Shen, H Bi, G Ke, L Zhang
bioRxiv, 2022.08.04.502811, 2022
45 · 2022
Metro: Efficient denoising pretraining of large scale autoencoding language models with model generated signals
P Bajaj, C Xiong, G Ke, X Liu, D He, S Tiwary, TY Liu, P Bennett, X Song, ...
arXiv preprint arXiv:2204.06644, 2022
39 · 2022
DPA-2: Towards a universal large atomic model for molecular and material simulation
D Zhang, X Liu, X Zhang, C Zhang, C Cai, H Bi, Y Du, X Qin, J Huang, B Li, ...
arXiv preprint arXiv:2312.15492, 2023
38* · 2023
TabNN: A universal neural network solution for tabular data
G Ke, J Zhang, Z Xu, J Bian, TY Liu
38* · 2018
A comprehensive transformer-based approach for high-accuracy gas adsorption predictions in metal-organic frameworks
J Wang, J Liu, H Wang, M Zhou, G Ke, L Zhang, J Wu, Z Gao, D Lu
Nature Communications 15 (1), 1904, 2024
37 · 2024
Do deep learning models really outperform traditional approaches in molecular docking?
Y Yu, S Lu, Z Gao, H Zheng, G Ke
arXiv preprint arXiv:2302.07134, 2023
37 · 2023
Highly accurate quantum chemical property prediction with uni-mol+
S Lu, Z Gao, D He, L Zhang, G Ke
arXiv preprint arXiv:2303.16982, 2023
35* · 2023
Quantized training of gradient boosting decision trees
Y Shi, G Ke, Z Chen, S Zheng, TY Liu
Advances in neural information processing systems 35, 18822-18833, 2022
30 · 2022
Articles 1–20