Xuedong Shang
INRIA (SequeL -> SCOOL)
Verified email at inria.fr - Homepage
Title
Cited by
Year
Gamification of pure exploration for linear bandits
R Degenne, P Ménard, X Shang, M Valko
37th International Conference on Machine Learning, 2020
21 | 2020
General parallel optimization without a metric
X Shang, E Kaufmann, M Valko
30th International Conference on Algorithmic Learning Theory, 2019
18 | 2019
A simple dynamic bandit algorithm for hyper-parameter tuning
X Shang, E Kaufmann, M Valko
6th ICML Workshop on Automated Machine Learning, 2019
7 | 2019
Adaptive black-box optimization got easier: HCT only needs local smoothness
X Shang, E Kaufmann, M Valko
14th European Workshop on Reinforcement Learning, 2018
7 | 2018
Fixed-confidence guarantees for Bayesian best-arm identification
X Shang, R de Heide, E Kaufmann, P Ménard, M Valko
23rd International Conference on Artificial Intelligence and Statistics, 2020
5 | 2020
UCB momentum Q-learning: Correcting the bias without forgetting
P Ménard, OD Domingues, X Shang, M Valko
38th International Conference on Machine Learning, 2021
3 | 2021
Time series clustering
D Barbe, A Debant, X Shang
https://xuedong.github.io/static/documents/time_series.pdf, 2016
2 | 2016
rlberry - A reinforcement learning library for research and education
rlberry Team
GitHub Repository, 2021
2021
Stochastic bandits with vector losses: Minimizing ℓ∞-norm of relative losses
X Shang, H Shao, J Qian
arXiv preprint arXiv:2010.08061, 2020
2020
Simple (dynamic) bandit algorithms for hyper-parameter optimization
X Shang, E Kaufmann, M Valko
https://xuedong.github.io/static/documents/dttts.pdf, 2019
2019
Hierarchical bandits for black-box optimization and Monte-Carlo tree search
X Shang
Team SequeL, Inria Lille - Nord Europe, 2017
2017
Optimal transport geometry for sentiment analysis
X Shang
Yamamoto-Cuturi Lab., Graduate School of Informatics, Kyoto University, 2016
2016
Personalized recommendation (Recommandation personnalisée)
X Shang
Team Magnet, Inria Lille - Nord Europe, 2015
2015