Aravind Rajeswaran
University of Washington, Facebook AI Research (FAIR)
Verified email at cs.washington.edu - Homepage
Title
Cited by
Year
Learning complex dexterous manipulation with deep reinforcement learning and demonstrations
A Rajeswaran, V Kumar, A Gupta, G Vezzani, J Schulman, E Todorov, ...
Robotics: Science and Systems (RSS), 2018
Cited by 294 · 2018
EPOpt: Learning Robust Neural Network Policies Using Model Ensembles
A Rajeswaran, S Ghotra, B Ravindran, S Levine
International Conference on Learning Representations (ICLR), 2017
Cited by 202 · 2017
Towards generalization and simplicity in continuous control
A Rajeswaran, K Lowrey, E Todorov, S Kakade
arXiv preprint arXiv:1703.02660, 2017
Cited by 191 · 2017
Meta-Learning with Implicit Gradients
A Rajeswaran, C Finn, S Kakade, S Levine
Advances in Neural Information Processing Systems (NeurIPS), 2019
Cited by 157 · 2019
Online Meta-Learning
C Finn, A Rajeswaran, S Kakade, S Levine
International Conference on Machine Learning (ICML), 2019
Cited by 138 · 2019
Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control
K Lowrey, A Rajeswaran, S Kakade, E Todorov, I Mordatch
International Conference on Learning Representations (ICLR), 2019
Cited by 88 · 2019
Identifying topology of low voltage distribution networks based on smart meter data
SJ Pappu, N Bhatt, R Pasumarthy, A Rajeswaran
IEEE Transactions on Smart Grid 9 (5), 5113-5122, 2017
Cited by 84 · 2017
Variance reduction for policy gradient with action-dependent factorized baselines
C Wu, A Rajeswaran, Y Duan, V Kumar, AM Bayen, S Kakade, I Mordatch, ...
International Conference on Learning Representations (ICLR), 2018
Cited by 81 · 2018
Divide-and-conquer reinforcement learning
D Ghosh, A Singh, A Rajeswaran, V Kumar, S Levine
International Conference on Learning Representations (ICLR), 2018
Cited by 64 · 2018
Dexterous manipulation with deep reinforcement learning: Efficient, general, and low-cost
H Zhu, A Gupta, A Rajeswaran, S Levine, V Kumar
International Conference on Robotics and Automation (ICRA), 2019
Cited by 55 · 2019
MOReL: Model-Based Offline Reinforcement Learning
R Kidambi, A Rajeswaran, P Netrapalli, T Joachims
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 37 · 2020
Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system
K Lowrey, S Kolev, J Dao, A Rajeswaran, E Todorov
2018 IEEE International Conference on Simulation, Modeling, and Programming …, 2018
Cited by 33 · 2018
A graph partitioning algorithm for leak detection in water distribution networks
A Rajeswaran, S Narasimhan, S Narasimhan
Computers & Chemical Engineering 108, 11-23, 2018
Cited by 22 · 2018
A novel approach for phase identification in smart grids using graph theory and principal component analysis
SP Jayadev, A Rajeswaran, NP Bhatt, R Pasumarthy
2016 American Control Conference (ACC), 5026-5031, 2016
Cited by 18 · 2016
A Game Theoretic Framework for Model Based Reinforcement Learning
A Rajeswaran, I Mordatch, V Kumar
International Conference on Machine Learning (ICML), 7953-7963, 2020
Cited by 15 · 2020
Learning deep visuomotor policies for dexterous hand manipulation
D Jain, A Li, S Singhal, A Rajeswaran, V Kumar, E Todorov
2019 International Conference on Robotics and Automation (ICRA), 3636-3643, 2019
Cited by 14 · 2019
Network topology identification using PCA and its graph theoretic interpretations
A Rajeswaran, S Narasimhan
arXiv preprint arXiv:1506.00438, 2015
Cited by 8 · 2015
Lyceum: An efficient and scalable ecosystem for robot learning
C Summers, K Lowrey, A Rajeswaran, S Srinivasa, E Todorov
Learning for Dynamics and Control, 793-803, 2020
Cited by 5 · 2020
COMBO: Conservative Offline Model-Based Policy Optimization
T Yu, A Kumar, R Rafailov, A Rajeswaran, S Levine, C Finn
arXiv preprint arXiv:2102.08363, 2021
Cited by 3 · 2021
Offline Reinforcement Learning from Images with Latent Space Models
R Rafailov, T Yu, A Rajeswaran, C Finn
arXiv preprint arXiv:2012.11547, 2020
Cited by 3 · 2020