Category: Tutorial

MLT: The Keras of Kubernetes*

Running distributed machine learning workloads has been a hot topic lately. Intel has shared documents that walk through the process of using Kubeflow* to run distributed TensorFlow* jobs on Kubernetes, a blog on using the Volume Controller for Kubernetes (KVC) for data management on clusters, and a blog describing a real-world use case…

Read More

Our Friend the Object Store

This is your TensorFlow I/O operation. This is your TensorFlow I/O operation on S3. Any questions? …Oh wait, you have a ton of questions, and doing this awesome thing interests you greatly? What’s going on here? Back in July 2017, Yong Tang added an S3 backend for TensorFlow’s filesystem interface. This means almost anywhere…
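The S3 filesystem backend picks up its connection settings from standard AWS environment variables. As a minimal sketch (the credential values, region, and endpoint below are placeholders, not taken from the original post), configuring access for a TensorFlow job might look like:

```shell
# Credentials for the S3 backend (the standard AWS environment variables)
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-key>"
export AWS_REGION="us-east-1"

# Optional: point at a non-AWS, S3-compatible object store
export S3_ENDPOINT="s3.example.com:9000"
export S3_USE_HTTPS="1"
```

With these set, an `s3://bucket/path` URI can be passed wherever TensorFlow accepts a file path.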

Read More

Amazing Inference Performance with Intel® Xeon® Scalable Processors

Over the past year, Intel has focused on optimizing popular deep learning frameworks and primitives for Intel® Xeon® processors. Now, in addition to being a common platform for inference workloads, the Intel Xeon Scalable processor (formerly codenamed Skylake-SP) is a competitive platform for both training and inference. Previously, deep learning training and inference on CPUs took…

Read More

Let’s Flow within Kubeflow

In this blog post, we will walk through training MNIST using distributed TensorFlow* and Kubeflow* from scratch. Introduction: Machine learning (ML) and deep learning (DL) have been around for more than half a century, yet only recently have these ideas begun to flourish, thanks to advancements in compute…

Read More

Learn about neon™ with the Nervana Deep Learning Course

Intel Nervana is excited to share a series of short videos and accompanying exercises that teach you how to build deep learning models with neon, our deep learning framework. We start with a basic introduction to deep learning concepts, provide an overview of the neon framework, and discuss key neon concepts such as loading data…

Read More