Matthieu Geist
Google Brain (on leave from a professorship at Université de Lorraine)
Sample-efficient batch reinforcement learning for dialogue management optimization
O Pietquin, M Geist, S Chandramohan, H Frezza-Buet
ACM Transactions on Speech and Language Processing (TSLP) 7 (3), 1-21, 2011
Kalman temporal differences
M Geist, O Pietquin
Journal of artificial intelligence research 39, 483-532, 2010
Algorithmic survey of parametric value function approximation
M Geist, O Pietquin
IEEE Transactions on Neural Networks and Learning Systems 24 (6), 845-867, 2013
User simulation in dialogue systems using inverse reinforcement learning
S Chandramohan, M Geist, F Lefevre, O Pietquin
Inverse reinforcement learning through structured classification
E Klein, M Geist, B Piot, O Pietquin
Advances in neural information processing systems, 1007-1015, 2012
Off-policy learning with eligibility traces: A survey
M Geist, B Scherrer
arXiv preprint arXiv:1304.3999, 2013
Laugh-aware virtual agent and its impact on user amusement
R Niewiadomski, J Hofmann, J Urbain, T Platt, J Wagner, P Bilal, T Ito, ...
University of Zurich, 2013
Human activity recognition using recurrent neural networks
D Singh, E Merdivan, I Psychoula, J Kropf, S Hanke, M Geist, A Holzinger
International Cross-Domain Conference for Machine Learning and Knowledge …, 2017
Approximate modified policy iteration and its application to the game of Tetris
B Scherrer, M Ghavamzadeh, V Gabillon, B Lesner, M Geist
J. Mach. Learn. Res. 16, 1629-1676, 2015
A comprehensive reinforcement learning framework for dialogue management optimization
L Daubigney, M Geist, S Chandramohan, O Pietquin
IEEE Journal of Selected Topics in Signal Processing 6 (8), 891-902, 2012
A theory of regularized Markov decision processes
M Geist, B Scherrer, O Pietquin
arXiv preprint arXiv:1901.11275, 2019
Sample efficient on-line learning of optimal dialogue policies with Kalman temporal differences
O Pietquin, M Geist, S Chandramohan
Twenty-Second International Joint Conference on Artificial Intelligence, 2011
Boosted Bellman residual minimization handling expert demonstrations
B Piot, M Geist, O Pietquin
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2014
Approximate modified policy iteration
B Scherrer, V Gabillon, M Ghavamzadeh, M Geist
arXiv preprint arXiv:1205.3054, 2012
Managing uncertainty within the KTD framework
M Geist, O Pietquin
Proceedings of the Workshop on Active Learning and Experimental Design (AL&E …, 2010
A cascaded supervised learning approach to inverse reinforcement learning
E Klein, B Piot, M Geist, O Pietquin
Joint European conference on machine learning and knowledge discovery in …, 2013
Parametric value function approximation: A unified view
M Geist, O Pietquin
2011 IEEE Symposium on Adaptive Dynamic Programming And Reinforcement …, 2011
Kalman Temporal Differences: the deterministic case
M Geist, O Pietquin, G Fricout
2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement …, 2009
Performance evaluation for particle filters
R Chou, Y Boers, M Podt, M Geist
14th International Conference on Information Fusion, 1-7, 2011
A Dantzig selector approach to temporal difference learning
M Geist, B Scherrer, A Lazaric, M Ghavamzadeh
arXiv preprint arXiv:1206.6480, 2012