Ofir Nachum
Google Brain
Verified email at google.com
Title · Cited by · Year
Data-Efficient Hierarchical Reinforcement Learning
O Nachum, S Gu, H Lee, S Levine
Advances in Neural Information Processing Systems, 2018
Cited by 685 · 2018
D4RL: Datasets for deep data-driven reinforcement learning
J Fu, A Kumar, O Nachum, G Tucker, S Levine
arXiv preprint arXiv:2004.07219, 2020
Cited by 496 · 2020
Behavior regularized offline reinforcement learning
Y Wu, G Tucker, O Nachum
arXiv preprint arXiv:1911.11361, 2019
Cited by 414 · 2019
A Lyapunov-based Approach to Safe Reinforcement Learning
Y Chow, O Nachum, E Duenez-Guzman, M Ghavamzadeh
Advances in Neural Information Processing Systems, 2018
Cited by 414 · 2018
Bridging the gap between value and policy based reinforcement learning
O Nachum, M Norouzi, K Xu, D Schuurmans
Advances in Neural Information Processing Systems 30, 2017
Cited by 398 · 2017
Learning to remember rare events
Ł Kaiser, O Nachum, A Roy, S Bengio
International Conference on Learning Representations, 2017
Cited by 373 · 2017
MorphNet: Fast & simple resource-constrained structure learning of deep networks
A Gordon, E Eban, O Nachum, B Chen, H Wu, TJ Yang, E Choi
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by 331 · 2018
DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections
O Nachum, Y Chow, B Dai, L Li
Advances in Neural Information Processing Systems 32, 2019
Cited by 234 · 2019
DeepMDP: Learning continuous latent space models for representation learning
C Gelada, S Kumar, J Buckman, O Nachum, MG Bellemare
International Conference on Machine Learning, 2170-2179, 2019
Cited by 219 · 2019
Identifying and correcting label bias in machine learning
H Jiang, O Nachum
International Conference on Artificial Intelligence and Statistics, 702-712, 2020
Cited by 211 · 2020
Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods
D Quillen, E Jang, O Nachum, C Finn, J Ibarz, S Levine
IEEE International Conference on Robotics and Automation, 2018
Cited by 209 · 2018
Lyapunov-based safe policy optimization for continuous control
Y Chow, O Nachum, A Faust, E Duenez-Guzman, M Ghavamzadeh
arXiv preprint arXiv:1901.10031, 2019
Cited by 183 · 2019
Near-optimal representation learning for hierarchical reinforcement learning
O Nachum, S Gu, H Lee, S Levine
arXiv preprint arXiv:1810.01257, 2018
Cited by 174 · 2018
AlgaeDICE: Policy gradient from arbitrary experience
O Nachum, B Dai, I Kostrikov, Y Chow, L Li, D Schuurmans
arXiv preprint arXiv:1912.02074, 2019
Cited by 153 · 2019
Offline reinforcement learning with fisher divergence critic regularization
I Kostrikov, R Fergus, J Tompson, O Nachum
International Conference on Machine Learning, 5774-5783, 2021
Cited by 146 · 2021
Trust-PCL: An off-policy trust region method for continuous control
O Nachum, M Norouzi, K Xu, D Schuurmans
International Conference on Learning Representations, 2018
Cited by 112 · 2018
RL Unplugged: A suite of benchmarks for offline reinforcement learning
C Gulcehre, Z Wang, A Novikov, T Paine, S Gómez, K Zolna, R Agarwal, ...
Advances in Neural Information Processing Systems 33, 7248-7259, 2020
Cited by 110* · 2020
Imitation learning via off-policy distribution matching
I Kostrikov, O Nachum, J Tompson
arXiv preprint arXiv:1912.05032, 2019
Cited by 110 · 2019
OPAL: Offline primitive discovery for accelerating offline reinforcement learning
A Ajay, A Kumar, P Agrawal, S Levine, O Nachum
arXiv preprint arXiv:2010.13611, 2020
Cited by 99 · 2020
Deployment-efficient reinforcement learning via model-based offline optimization
T Matsushima, H Furuta, Y Matsuo, O Nachum, S Gu
arXiv preprint arXiv:2006.03647, 2020
Cited by 97 · 2020
Articles 1–20