Philip Thomas
Title · Cited by · Year
Data-efficient off-policy policy evaluation for reinforcement learning
P Thomas, E Brunskill
International Conference on Machine Learning, 2139-2148, 2016
Cited by 545 · 2016
Value function approximation in reinforcement learning using the Fourier basis
G Konidaris, S Osentoski, P Thomas
Proceedings of the AAAI Conference on Artificial Intelligence 25 (1), 380-385, 2011
Cited by 450 · 2011
High-confidence off-policy evaluation
P Thomas, G Theocharous, M Ghavamzadeh
Proceedings of the AAAI Conference on Artificial Intelligence 29 (1), 2015
Cited by 266 · 2015
High confidence policy improvement
P Thomas, G Theocharous, M Ghavamzadeh
International Conference on Machine Learning, 2380-2388, 2015
Cited by 193 · 2015
Ad recommendation systems for life-time value optimization
G Theocharous, PS Thomas, M Ghavamzadeh
Proceedings of the 24th International Conference on World Wide Web, 1305-1310, 2015
Cited by 166 · 2015
Increasing the action gap: New operators for reinforcement learning
MG Bellemare, G Ostrovski, A Guez, P Thomas, R Munos
Proceedings of the AAAI Conference on Artificial Intelligence 30 (1), 2016
Cited by 146 · 2016
Bias in natural actor-critic algorithms
P Thomas
International Conference on Machine Learning, 441-448, 2014
Cited by 144 · 2014
Preventing undesirable behavior of intelligent machines
P Thomas, B Castro da Silva, A Barto, S Giguere, Y Brun, E Brunskill
Science 366 (6468), 999-1004, 2019
Cited by 143 · 2019
Learning action representations for reinforcement learning
Y Chandak, G Theocharous, J Kostas, S Jordan, P Thomas
International Conference on Machine Learning, 941-950, 2019
Cited by 140 · 2019
Safe reinforcement learning
PS Thomas
Cited by 100 · 2015
Proximal reinforcement learning: A new theory of sequential decision making in primal-dual spaces
S Mahadevan, B Liu, P Thomas, W Dabney, S Giguere, N Jacek, I Gemp, ...
arXiv preprint arXiv:1405.6757, 2014
Cited by 55 · 2014
Training an actor-critic reinforcement learning controller for arm movement using human-generated rewards
KM Jagodnik, PS Thomas, AJ van den Bogert, MS Branicky, RF Kirsch
IEEE Transactions on Neural Systems and Rehabilitation Engineering 25 (10 …, 2017
Cited by 54 · 2017
Predictive off-policy policy evaluation for nonstationary decision problems, with applications to digital marketing
P Thomas, G Theocharous, M Ghavamzadeh, I Durugkar, E Brunskill
Proceedings of the AAAI Conference on Artificial Intelligence 31 (2), 4740-4745, 2017
Cited by 54 · 2017
Is the policy gradient a gradient?
C Nota, PS Thomas
arXiv preprint arXiv:1906.07073, 2019
Cited by 49 · 2019
Policy gradient methods for reinforcement learning with function approximation and action-dependent baselines
PS Thomas, E Brunskill
arXiv preprint arXiv:1706.06643, 2017
Cited by 48 · 2017
Evaluating the performance of reinforcement learning algorithms
S Jordan, Y Chandak, D Cohen, M Zhang, P Thomas
International Conference on Machine Learning, 4962-4973, 2020
Cited by 46 · 2020
Use of atrial and bifocal cardiac pacemakers for treating resistant dysrhythmias.
LS Dreifus, BV Berkovits, D Kimibiris, K Moghadam, G Haupt, P Walinsky, ...
European Journal of Cardiology 3 (4), 257-266, 1975
Cited by 45 · 1975
Optimizing for the future in non-stationary MDPs
Y Chandak, G Theocharous, S Shankar, M White, S Mahadevan, ...
International Conference on Machine Learning, 1414-1425, 2020
Cited by 44 · 2020
Using options and covariance testing for long horizon off-policy policy evaluation
Z Guo, PS Thomas, E Brunskill
Advances in Neural Information Processing Systems 30, 2017
Cited by 43 · 2017
Some recent applications of reinforcement learning
AG Barto, PS Thomas, RS Sutton
Proceedings of the Eighteenth Yale Workshop on Adaptive and Learning Systems, 2017
Cited by 41 · 2017