Reinforcement Learning Coach v0.9
Dec 19, 2017
Since the release of Coach a couple of months ago, we have been working hard to push it into new frontiers that improve its usability for real-world applications. In this release, we are introducing several new features that move Coach forward in this direction.
First, we added several convenient tools for imitation learning, along with the basic behavioral cloning algorithm. Imitation learning can be a very efficient way to reach good behavior quickly, and is an important addition to Coach’s toolbox. Coach now allows users to interact with the simulation environments and collect data from human demonstrations. Additionally, it supports loading a previously collected dataset of experience and training an agent to imitate the behavior in that dataset. As a starting point, we added a few presets and datasets for several Doom and Gym environments.
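At its core, behavioral cloning treats the demonstration dataset as a supervised learning problem: the agent's policy is trained to predict the demonstrator's action from the observed state. The following is a minimal NumPy sketch of that idea — not Coach's actual implementation — using a linear softmax policy and cross-entropy gradient descent; all function names here are illustrative.

```python
import numpy as np

def train_behavioral_cloning(states, actions, n_actions, lr=0.1, epochs=200):
    """Fit a linear softmax policy to (state, action) demonstration pairs
    by minimizing cross-entropy -- the essence of behavioral cloning."""
    n, d = states.shape
    W = np.zeros((d, n_actions))
    b = np.zeros(n_actions)
    for _ in range(epochs):
        logits = states @ W + b
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        # gradient of the mean cross-entropy loss w.r.t. the logits
        grad = probs.copy()
        grad[np.arange(n), actions] -= 1.0
        grad /= n
        W -= lr * states.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def act(W, b, state):
    """Greedy action of the cloned policy for a single state."""
    return int(np.argmax(state @ W + b))
```

In practice the policy would be a deep network and the demonstrations would come from the recorded human gameplay described above, but the training loop — supervised classification of expert actions — is the same.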
A Doom agent and a Montezuma’s Revenge agent trained using Behavioral Cloning
The second addition is built-in support for the recently released CARLA simulator. CARLA is an open-source urban driving simulator, developed as a collaboration between Intel Labs and the Computer Vision Center (CVC), that includes realistic urban environments. CARLA enables the training of autonomous driving agents and is now integrated with Coach. We also added several presets for training both reinforcement learning and imitation learning agents on simple driving behaviors.
A CARLA agent trained using reinforcement learning
Finally, to keep up with the state of the art in reinforcement learning, we recently added the Quantile Regression DQN algorithm, which has been shown to outperform the Categorical DQN algorithm on the Atari benchmark.
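Quantile Regression DQN represents the return distribution by N quantile values and trains them with an asymmetrically weighted Huber loss (Dabney et al., 2017). Below is a minimal NumPy sketch of that quantile Huber loss only — not Coach's implementation, and the function name is illustrative.

```python
import numpy as np

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """Quantile regression Huber loss used by QR-DQN.

    pred_quantiles: (N,) predicted quantile values theta_i, associated with
        the midpoint quantile fractions tau_i = (i + 0.5) / N.
    target_samples: (M,) samples of the target return distribution.
    kappa: Huber threshold; errors below it are squared, above it linear.
    """
    N = len(pred_quantiles)
    tau = (np.arange(N) + 0.5) / N                          # tau_hat_i
    # pairwise TD errors u_ij = target_j - theta_i
    u = target_samples[None, :] - pred_quantiles[:, None]   # shape (N, M)
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # asymmetric quantile weighting |tau - 1{u < 0}| penalizes over- and
    # under-estimation differently, which is what makes each output track
    # its own quantile of the return distribution
    weight = np.abs(tau[:, None] - (u < 0).astype(float))
    return (weight * huber).mean(axis=1).sum()
```

The loss is zero when the predicted quantiles coincide with the target, and the asymmetric weights spread the predictions across the distribution instead of collapsing them onto its mean.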
To conclude, we believe that the CARLA simulator, along with the new imitation learning tools, opens a new world of possibilities for users interested in applying reinforcement learning to real-world applications. Go ahead and try it out by following the instructions on our GitHub repository.
CARLA: An Open Urban Driving Simulator. Alexey Dosovitskiy, Germán Ros, Felipe Codevilla, Antonio López and Vladlen Koltun. CoRR, abs/1711.03938, 2017.
Distributional Reinforcement Learning with Quantile Regression. Will Dabney, Mark Rowland, Marc G. Bellemare and Rémi Munos. CoRR, abs/1710.10044, 2017.
A Distributional Perspective on Reinforcement Learning. Marc G. Bellemare, Will Dabney and Rémi Munos. CoRR, abs/1707.06887, 2017.