
Elson Rodriguez

Cloud Software Engineer, Artificial Intelligence Products Group

This is your TensorFlow I/O operation:
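A minimal sketch of such an operation, using TensorFlow's `tf.io.gfile` file API (2018-era releases exposed the same thing as `tf.gfile`); the file name is a placeholder:

```python
import os
import tempfile

import tensorflow as tf

# Write a throwaway local file, then read it back through TensorFlow's
# filesystem layer -- the same code path every TF I/O operation goes through.
path = os.path.join(tempfile.gettempdir(), "tf-io-demo.txt")

with tf.io.gfile.GFile(path, "w") as f:
    f.write("hello, tensorflow")

with tf.io.gfile.GFile(path, "r") as f:
    contents = f.read()

print(contents)
```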

This is your TensorFlow I/O operation on S3:
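The same read with only the path scheme changed (the bucket and key are placeholders, and actually calling this needs the credentials configured in Step 1 below):

```python
import tensorflow as tf

def read_from_s3(path="s3://mybucket/tf-io-demo.txt"):
    # Placeholder bucket/key -- this is identical code to the local read;
    # the s3:// scheme is what routes it through TensorFlow's S3 filesystem.
    with tf.io.gfile.GFile(path, "r") as f:
        return f.read()

# read_from_s3()  # uncomment once your S3 credentials are configured
```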

Any questions?

…Oh wait, you have a ton of questions and doing this awesome thing interests you greatly?

What’s going on here?

Back in July 2017, Yong Tang added an S3 backend for TensorFlow's filesystem interface. This means that almost anywhere TensorFlow I/O operations are used, an S3 path can be used instead.

To use this feature, we'll need to follow so few steps that we can enumerate them in a big bold font to make it look super simple.

Step 1: Define your S3 parameters

The S3 backend takes environment variables for its configuration. Start with the values below and modify them according to your S3 environment:
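A sketch of the variables involved; the credentials and endpoint below are placeholders, and `S3_ENDPOINT`, `S3_USE_HTTPS`, and `S3_VERIFY_SSL` matter mainly when pointing at a non-AWS, S3-compatible store:

```shell
export AWS_ACCESS_KEY_ID=mykey          # placeholder credentials
export AWS_SECRET_ACCESS_KEY=mysecret
export AWS_REGION=us-east-1             # only needed for AWS S3
export S3_ENDPOINT=s3.example.com:9000  # host:port of your S3 endpoint
export S3_USE_HTTPS=0                   # 1 for HTTPS, 0 for plain HTTP
export S3_VERIFY_SSL=0                  # 0 to skip certificate verification
```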

Step 2: Use TensorFlow

Next, use TensorFlow.

Take your favorite model and try it out! Simply swap any paths in your model with an S3 URL. For the linked model, this is controlled by an environment variable:
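As a hypothetical sketch (the variable names, bucket, and script below are placeholders): many training scripts take their data and export locations from an environment variable or flag, so pointing them at a bucket is a one-line change:

```shell
export DATA_DIR=s3://mybucket/training-data   # placeholder bucket and prefix
export EXPORT_DIR=s3://mybucket/saved-model
python train.py                               # placeholder training script
```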

Or you can simply rerun the smoke test we started this post with.

Almost every utility in the TensorFlow ecosystem will also respect an S3 path:
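For example, TensorBoard will happily read event files straight out of a bucket (the bucket name is a placeholder):

```shell
tensorboard --logdir s3://mybucket/train-logs  # placeholder bucket
```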

But Why?

When it comes to storing data, there are many options, each with benefits and drawbacks. The most common is the local filesystem; however, it is inherently unscalable and a non-starter for distributed training. Shared filesystems are another option, but implementations tend to be rare on cloud service providers, and a server-side error can mean a hung training job, problematic mounts, or a node reboot. While an object store isn't a panacea for your I/O woes (worse, it may actually perform slower), it offers resilience, simplicity, and ubiquity that shared filesystems can't match.

What if I don’t have AWS?

While Amazon invented S3, the simple semantics of the interface have made it the de facto object store API. Google Cloud Storage is interoperable, there are guides on proxying requests to Azure Blob Storage, and countless other vendors provide S3-compatible storage solutions.

However, one solution that stood out to me, especially for ease of use, was Minio. Minio is a distributed S3-compatible object store written in Go; it is SUPER simple to deploy and has an amazingly responsive team.
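As an illustration of that simplicity, a single-node Minio for experimentation can be one container (the credentials below are placeholders; distributed mode is the same binary pointed at more disks and hosts):

```shell
# Single-node Minio serving the S3 API on :9000, storing objects in /data.
docker run -p 9000:9000 \
  -e MINIO_ACCESS_KEY=mykey \
  -e MINIO_SECRET_KEY=mysecret \
  minio/minio server /data
```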

I do most of my work on Kubernetes, and I was easily able to tailor their examples to deploy on a bare-metal cluster with no StorageClass setup.

This means that even in your datacenter, with no existing storage solution, you can get up and running with S3 in no time!

Now what?

Try it out with your model, or check out the MNIST example in the Kubeflow project for an end-to-end S3-based workflow.

Also, if you need more performance for data loading, check out Balaji Subramaniam's Kube Volume Controller to cache S3 data locally for your workloads.

While S3 support in TensorFlow is relatively young, S3 has seen success throughout the IT industry, and this intersection of object storage and machine learning will allow us to look at our storage solutions in a new context.


