Reinforcement Learning Coach: Data Science at Scale

At the Intel AI Lab, we are committed to enabling the use of state-of-the-art artificial intelligence algorithms in data science. One year ago, we open-sourced Reinforcement Learning Coach, a comprehensive framework for developing, training, and evaluating reinforcement learning (RL) agents. Since then, we have been working hard to add more algorithms and simulation environments, support more complex features and settings, and provide reproducible, high-quality implementations of state-of-the-art reinforcement learning algorithms. In the past few weeks, we’ve been busy adding even more algorithms and features. We’ve also been working with the Amazon SageMaker* team to integrate Coach with the SageMaker platform and enable developers and data scientists to build, train, and deploy RL-based solutions at scale. We are happy to share the results of these joint efforts in the new Coach 0.11.0 release.
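
To give a sense of what developing and training an agent with Coach looks like, here is a minimal sketch using the rl_coach Python API, training a Clipped PPO agent on an OpenAI Gym* environment. It follows the package layout of recent Coach releases; consult the documentation for the exact API of your version.

```python
# Minimal Coach training sketch (rl_coach Python API; module paths may
# differ slightly between Coach versions).
from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import SimpleSchedule

# Define the agent, the environment, and a default training schedule.
graph_manager = BasicRLGraphManager(
    agent_params=ClippedPPOAgentParameters(),
    env_params=GymVectorEnvironment(level='CartPole-v0'),
    schedule_params=SimpleSchedule()
)

# Run the heatup -> train -> evaluate loop defined by the schedule.
graph_manager.improve()
```

Definitions of this kind are what Coach ships as presets, which can also be launched directly from the command line.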

Horizontal Scaling

Reinforcement Learning Coach is built with the Intel-optimized version of TensorFlow* to enable efficient training of RL agents on multi-core CPUs. While training on a single node is efficient, complex problems such as playing strategy games, or problems that require computationally heavy simulations, often need a distributed system in order to converge in a reasonable amount of time. In the new Coach 0.11.0 release, we have added horizontal scaling support that allows training an RL agent with multiple actors. Please refer to our tutorial on how to set up a cluster and train with multiple nodes.
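
As a rough sketch of what the multi-actor setup touches on the preset side, the snippet below marks how rollout actors synchronize with the training worker. It assumes the DistributedCoachSynchronizationType setting from rl_coach.base_parameters; the cluster itself (orchestration, memory backend, and data store) is configured separately, as described in the tutorial.

```python
# Sketch only: marking a preset for multi-actor (distributed) training.
# Assumes the DistributedCoachSynchronizationType setting in
# rl_coach.base_parameters; the cluster setup itself is covered in the tutorial.
from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters
from rl_coach.base_parameters import DistributedCoachSynchronizationType
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import SimpleSchedule

agent_params = ClippedPPOAgentParameters()
# SYNC: rollout actors wait for the newest policy after every training phase;
# ASYNC: actors keep collecting experience and pull policy updates when available.
agent_params.algorithm.distributed_coach_synchronization_type = \
    DistributedCoachSynchronizationType.SYNC

graph_manager = BasicRLGraphManager(
    agent_params=agent_params,
    env_params=GymVectorEnvironment(level='CartPole-v0'),
    schedule_params=SimpleSchedule()
)
```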

Integration with Amazon SageMaker

Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly build, train, and deploy machine learning models at any scale. We have been working with the SageMaker team to integrate Reinforcement Learning Coach with SageMaker and enable building and training RL-based solutions on the platform.
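
As an illustration, a Coach training job can be launched through the SageMaker Python SDK's RL estimator. The sketch below uses sagemaker.rl.RLEstimator; the entry-point script, IAM role, and instance settings are placeholders to replace with your own, and argument names may vary between SDK versions.

```python
# Sketch of launching a Coach training job with the SageMaker Python SDK.
# The entry-point script, IAM role, and instance settings below are
# placeholders, not values from this post.
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework

estimator = RLEstimator(
    entry_point='train-coach.py',        # your script: defines the preset and starts training
    source_dir='src',                    # directory containing the script and its dependencies
    toolkit=RLToolkit.COACH,             # use Reinforcement Learning Coach as the RL toolkit
    toolkit_version='0.11.0',
    framework=RLFramework.TENSORFLOW,    # or RLFramework.MXNET
    role='<your-sagemaker-execution-role>',
    train_instance_type='ml.m5.xlarge',
    train_instance_count=1,
)

# Start the managed training job.
estimator.fit()
```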

Additional Features and Algorithms

As part of the work with the SageMaker team, Coach now supports Apache MXNet* in addition to TensorFlow, and we plan to add support for more frameworks in future releases. Additionally, trained models can now be exported to ONNX* for use with deep learning frameworks that Coach does not currently support.
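
The sketch below shows what selecting the backend framework and requesting ONNX export might look like through the Python API. It assumes the framework_type and export_onnx_graph options on rl_coach.base_parameters.TaskParameters (the command line exposes equivalent flags); treat the exact names as version-dependent.

```python
# Sketch: choosing the MXNet backend and requesting ONNX export via
# TaskParameters. Assumes the framework_type, export_onnx_graph, and
# checkpoint options exposed by rl_coach.base_parameters in recent releases.
from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters
from rl_coach.base_parameters import TaskParameters, Frameworks
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import SimpleSchedule

graph_manager = BasicRLGraphManager(
    agent_params=ClippedPPOAgentParameters(),
    env_params=GymVectorEnvironment(level='CartPole-v0'),
    schedule_params=SimpleSchedule()
)

task_parameters = TaskParameters(
    framework_type=Frameworks.mxnet,   # run the agent's networks on MXNet instead of TensorFlow
    export_onnx_graph=True,            # also write an ONNX copy of the model when checkpointing
    checkpoint_save_secs=60,           # ONNX export piggybacks on checkpoint saving
    experiment_path='./experiments/cartpole_mxnet',
)

graph_manager.create_graph(task_parameters)
graph_manager.improve()
```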

We continuously add RL algorithms to Coach, and with the two latest additions, Rainbow and Conditional Imitation Learning, Coach now supports more than 23 algorithms. To make navigation easier, and to simplify choosing an algorithm relevant to a specific problem, we have added a small utility that filters algorithms by criteria.

The Intel AI Lab has been using Reinforcement Learning Coach during the past year for research and data science efforts within Intel. We would be happy to hear feedback on additional features that would be useful, and about your experience using Coach, either at coach@intel.com or as issues on our GitHub repo.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. 
*Other names and brands may be claimed as the property of others.  
© Intel Corporation
