Reintroducing PlaidML

Now that Vertex.AI has joined Intel’s Artificial Intelligence Products Group, we are pleased to reintroduce the PlaidML open source tensor compiler. We are committed to maintaining and developing this project further as an nGraph library back end. The PlaidML back end with nGraph will bring deep learning to new platforms: it will enable training on the built-in GPUs in Apple MacBook* and Microsoft Windows* laptops, and inference on embedded devices. By adding compatibility with nGraph, PlaidML will allow TensorFlow and other frameworks to run with acceleration on a wide range of platforms. PlaidML remains an open source project, and we will continue to develop it in the open. Read on for a quick start tutorial and ways to get involved.

What is PlaidML?

PlaidML is a portable tensor compiler that allows deep learning to work in environments that are normally compute-limited, such as laptops and embedded devices. Tensor compilers bridge the gap between the universal mathematical descriptions of deep learning operations, such as convolution, and the platform- and chip-specific code needed to perform those operations with good performance.

The current version of PlaidML is compatible with most Linux*, Mac*, and Windows* operating systems and most CPUs and GPUs. It does this by using its Tile language to generate precisely tailored OpenCL, OpenGL, LLVM, or CUDA code on the fly. As an example, below are the mathematical notation and PlaidML’s Tile language description for a common convolution operation:
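In conventional notation, a 2-D convolution produces O[n, h, w, co] as the sum over kh, kw, and ci of I[n, h + kh, w + kw, ci] * K[kh, kw, ci, co]. The Tile version below is a rough sketch of that same contraction (the index names and the "valid" output sizing are illustrative, not an exact excerpt from our documentation):

function (I[N, H, W, CI], K[KH, KW, CI, CO]) -> (O) {
    O[n, h, w, co : N, H - KH + 1, W - KW + 1, CO] =
        +(I[n, h + kh, w + kw, ci] * K[kh, kw, ci, co]);
}

Indices that appear only on the right-hand side (kh, kw, ci) are reduced over automatically by the leading + aggregation, which is how a single line captures the entire summation.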

The Tile description is then compiled into lower-level code, such as an OpenCL kernel that is frequently 100 or more lines long. Note that changes to the hardware, the operating system, or the input shapes result in substantially different generated code.

By automating these code changes, PlaidML speeds deployment to new platforms and gives data scientists the ability to experiment more rapidly with new research ideas. This provides the key capability needed to connect graph layers like nGraph to the diverse configurations required to deploy deep learning everywhere. We plan to extend PlaidML’s capabilities to more workloads and platforms.

Intel’s Commitment to Open Source

PlaidML will continue to be available as an open source platform, with development backed by Intel’s Artificial Intelligence Products Group. The upcoming 0.5 release of PlaidML will change the license to Apache 2.0 to improve compatibility with nGraph, TensorFlow, and other ecosystem software. Combined with nGraph and OpenVINO, PlaidML expands the deep learning capabilities of Intel’s broad silicon product portfolio and offers a framework for adoption of diverse hardware accelerators across the industry.

How You Can Get Involved

To install PlaidML and run a quick benchmark, all you need is a Linux, Mac, or Windows-based computer with a working Python installation:

sudo pip install plaidml plaidml-keras
git clone https://github.com/plaidml/plaidbench
cd plaidbench
pip install -r requirements.txt
python plaidbench.py mobilenet
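Once the packages above are installed, PlaidML plugs into Keras as a backend. The snippet below is a minimal sketch of that flow; the toy model and dummy input are placeholders for illustration only:

# Minimal sketch: run a small Keras model on the PlaidML backend.
# Assumes the plaidml and plaidml-keras packages installed above.
import numpy as np

import plaidml.keras
plaidml.keras.install_backend()  # must run before any other Keras imports

from keras.models import Sequential
from keras.layers import Dense

# A toy single-layer classifier, used here only to exercise the backend.
model = Sequential([Dense(10, activation="softmax", input_shape=(784,))])
model.compile(optimizer="sgd", loss="categorical_crossentropy")

# One dummy prediction; the compute runs through PlaidML's generated kernels.
print(model.predict(np.zeros((1, 784), dtype="float32")).shape)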

For further details on installation and usage, see the main PlaidML readme.

Closing Thoughts

Building on the current code base, we’ll be expanding our documentation to cover how to get started with development and add features. To support the latest research, we’ll be documenting the Tile language we use to add new ops (or layer types) in a device-portable way. Finally, by continuing our commitment to open source development, we’re opening the door to outside collaborators to bring deep learning to new use cases and platforms. We’d love your involvement, from letting us know your experiences, to sharing benchmark data, to code contributions. Let us know how we can help you get started. You can get in touch by joining us at PlaidML on GitHub or at ai.intel.com.

*Other names and brands may be claimed as the property of others.