Dieter Büchler
Group leader @ Max Planck Institute for Intelligent Systems
Open X-Embodiment: Robotic Learning Datasets and RT-X Models
A Padalkar, A Pooley, A Jain, A Bewley, A Herzog, A Irpan, A Khazatsky, ...
arXiv preprint arXiv:2310.08864, 2023
Learning to play table tennis from scratch using muscular robots
D Büchler, S Guist, R Calandra, V Berenz, B Schölkopf, J Peters
IEEE Transactions on Robotics 38 (6), 3850-3860, 2022
Jointly learning trajectory generation and hitting point prediction in robot table tennis
Y Huang, D Büchler, O Koç, B Schölkopf, J Peters
2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids …, 2016
A lightweight robotic arm with pneumatic muscles for robot learning
D Büchler, H Ott, J Peters
2016 IEEE International Conference on Robotics and Automation (ICRA), 4086-4092, 2016
Control of Musculoskeletal Systems using Learned Dynamics Models
D Büchler, R Calandra, B Schölkopf, J Peters
IEEE Robotics and Automation Letters 3 (4), 3161-3168, 2018
Hierarchical reinforcement learning with timed subgoals
N Gürtler, D Büchler, G Martius
Advances in Neural Information Processing Systems 34, 21732-21743, 2021
Learning to control highly accelerated ballistic movements on muscular robots
D Büchler, R Calandra, J Peters
Robotics and Autonomous Systems, 104230, 2022
DEP-RL: Embodied Exploration for Reinforcement Learning in Overactuated and Musculoskeletal Systems
P Schumacher, D Häufle, D Büchler, S Schmitt, G Martius
International Conference on Learning Representations (ICLR), 2023
Action-conditional recurrent kalman networks for forward and inverse dynamics learning
V Shaj, P Becker, D Büchler, H Pandya, N van Duijkeren, CJ Taylor, ...
Conference on Robot Learning (CoRL), 765-781, 2021
Learning with Muscles: Benefits for Data-Efficiency and Robustness in Anthropomorphic Tasks
I Wochner, P Schumacher, G Martius, D Büchler, S Schmitt, D Haeufle
Conference on Robot Learning (CoRL), 1178-1188, 2022
A Learning-based Iterative Control Framework for Controlling a Robot Arm with Pneumatic Artificial Muscles
H Ma, D Büchler, B Schölkopf, M Muehlebach
Robotics: Science and Systems (R:SS), 2022
Hidden Parameter Recurrent State Space Models For Changing Dynamics Scenarios
V Shaj, D Büchler, R Sonker, P Becker, G Neumann
International Conference on Learning Representations (ICLR), 2021
Black-Box vs. Gray-Box: A Case Study on Learning Table Tennis Ball Trajectory Prediction with Spin and Impacts
J Achterhold, P Tobuschat, H Ma, D Büchler, M Muehlebach, J Stueckler
Learning for Dynamics and Control Conference (L4DC), 878-890, 2023
The o80 C++ templated toolbox: Designing customized Python APIs for synchronizing realtime processes
V Berenz, M Naveau, F Widmaier, M Wüthrich, JC Passy, S Guist, ...
AIMY: an open-source table tennis ball launcher for versatile and high-fidelity trajectory generation
A Dittrich, J Schneider, S Guist, N Gürtler, H Ott, T Steinbrenner, ...
2023 IEEE International Conference on Robotics and Automation (ICRA), 3058-3064, 2023
Reinforcement learning with model-based feedforward inputs for robotic table tennis
H Ma, D Büchler, B Schölkopf, M Muehlebach
Autonomous Robots 47 (8), 1387-1403, 2023
Investigating the Impact of Action Representations in Policy Gradient Algorithms
J Schneider, P Schumacher, D Häufle, B Schölkopf, D Büchler
Workshop on Effective Representations, Abstractions, and Priors for Robot …, 2023
Hindsight States: Blending Sim and Real Task Elements for Efficient Reinforcement Learning
S Guist, J Schneider, A Dittrich, V Berenz, B Schölkopf, D Büchler
Robotics: Science and Systems (R:SS), 2023