Blog

Sep 09, 2016   |   Mark Robins

Simplified ncloud syntax and other improvements to Nervana Cloud

Nervana Cloud is a full-stack hosted platform for deep learning that enables businesses to develop and deploy high-accuracy deep learning solutions at a fraction of the cost of building their own infrastructure and data science teams. We recently updated Nervana Cloud’s ncloud command-line interface (CLI) syntax to support subcommands and shortcuts for improved usability and…

Read more

#Intel DL Cloud & Systems

Aug 09, 2016   |   Naveen Rao

Intel + Nervana

Today, we’re excited to announce the planned acquisition of Nervana by Intel*.  With this acquisition, Intel is formally committing to pushing the forefront of AI technologies.  Nervana intends to continue all existing development efforts including the Nervana Neon deep learning framework, Nervana deep learning platform, and the Nervana Engine deep learning hardware.  The combination of…

Read more

#News

Aug 02, 2016   |   Carlos Morales

Securing the Deep Learning Stack

This is the first post of Nervana’s “Security Meets Deep Learning” series. Security is one of the biggest concerns for any enterprise, but it’s especially critical for companies deploying deep learning solutions since datasets often contain extremely sensitive information. Fundamentally, “security” refers to the protection of a system against the many forms of malicious attacks.…

Read more

#Intel DL Cloud & Systems

Jul 25, 2016   |   Scott Gray

Still not slowing down: Benchmarking optimized Winograd implementations

By: Scott Gray and Urs Köster. This is part 3 of a series of posts on using the Winograd algorithm to make convolutional networks faster than ever before. In the second part we provided a technical overview of how the algorithm works. Since the first set of Winograd kernels in neon, which we described in…

Read more

#neon
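A quick arithmetic sketch (not from the post itself) of why Winograd convolution speeds things up: for F(m×m, 3×3) tiles, direct convolution needs m²·3² multiplications per output tile, while the Winograd formulation needs (m+2)². That ratio sets the algorithmic ceiling the benchmarks above are measured against; realized speedups are lower once transform overhead and memory traffic are counted.

```python
# Rough arithmetic sketch: multiplication counts per output tile for direct
# convolution vs. Winograd F(m x m, r x r), following the standard analysis in
# Lavin & Gray, "Fast Algorithms for Convolutional Neural Networks".

def winograd_mult_reduction(m, r=3):
    """Ratio of multiplies: direct conv vs. Winograd F(m x m, r x r)."""
    direct = (m * r) ** 2                  # m*m outputs, each needing r*r multiplies
    winograd = (m + r - 1) ** 2            # one multiply per element of the transformed tile
    return direct / float(winograd)

for m in (2, 4):
    print("F(%dx%d, 3x3): %.2fx fewer multiplications" % (m, m, winograd_mult_reduction(m)))
# F(2x2, 3x3): 2.25x fewer multiplications
# F(4x4, 3x3): 4.00x fewer multiplications
```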

Jul 16, 2016   |   Hanlin Tang

Learn about neon™ with the Nervana Deep Learning Course

Intel Nervana is excited to share a series of short videos and accompanying exercises that teach how to build deep learning models with neon, our deep learning framework. We start with a basic introduction to deep learning concepts, provide an overview of the neon framework, and discuss key neon concepts such as loading data…

Read more

#neon
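For orientation before diving into the course, the sketch below shows the shape of a small neon model in the style of the framework's MNIST example: generate a backend, load data, stack layers, and call fit. The module paths and signatures are assumptions based on neon v1.x and may differ between versions.

```python
# Minimal sketch of a neon v1.x-style model, loosely following the framework's
# MNIST example; exact module paths and signatures may differ by neon version.
from neon.backends import gen_backend
from neon.data import MNIST
from neon.initializers import Gaussian
from neon.layers import Affine, GeneralizedCost
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
from neon.transforms import Rectlin, Softmax, CrossEntropyMulti
from neon.callbacks.callbacks import Callbacks

be = gen_backend(backend='cpu', batch_size=128)    # or 'gpu' on a CUDA device

mnist = MNIST(path='data')                         # downloads MNIST on first use
train_set, valid_set = mnist.train_iter, mnist.valid_iter

init = Gaussian(loc=0.0, scale=0.01)
layers = [Affine(nout=100, init=init, activation=Rectlin()),
          Affine(nout=10,  init=init, activation=Softmax())]

mlp = Model(layers=layers)
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
opt = GradientDescentMomentum(0.1, momentum_coef=0.9)
callbacks = Callbacks(mlp, eval_set=valid_set)

mlp.fit(train_set, optimizer=opt, num_epochs=10, cost=cost, callbacks=callbacks)
```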

Jul 01, 2016   |   Jennifer Myers

neon v1.5 released!

We’re excited to release neon v1.5 with Python 2 and Python 3 support, support for Pascal GPUs (GTX 1080) and performance enhancements such as persistent RNN kernels (based on the paper by Greg Diamos at Baidu), bringing a 12x performance gain compared to v1.4.0. Highlights from this release include: Python2/Python3 compatibility [#191] Support for Pascal…

Read more

#Release Notes

Jun 29, 2016   |   Urs Köster

Going beyond full utilization: The inside scoop on Nervana's Winograd kernels

By: Urs Köster and Scott Gray. This is part 2 of a series of posts on how Nervana uses the Winograd algorithm to make convolutional networks faster than ever before. In the first part we focused on benchmarks demonstrating a 2-3x algorithmic speedup. This part will get a bit more technical and dive into the guts of…

Read more

#neon
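As a toy companion to the technical overview above (not Nervana's kernel code), the NumPy snippet below works through one-dimensional Winograd F(2,3) minimal filtering using the standard transform matrices and checks the result against the direct computation. The 2-D algorithm used in the kernels nests this construction along both spatial dimensions.

```python
# Toy NumPy illustration of 1-D Winograd F(2,3) minimal filtering -- the
# building block the post discusses -- not the optimized GPU kernels themselves.
# Matrices follow the standard formulation in Lavin & Gray (2015).
import numpy as np

BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

d = np.random.randn(4)   # 4 input samples
g = np.random.randn(3)   # 3-tap filter

# Winograd: transform data and filter, multiply elementwise (4 multiplies),
# then transform back to obtain 2 outputs.
y_winograd = AT @ ((G @ g) * (BT @ d))

# Direct computation of the same 2 outputs of a "valid" correlation.
y_direct = np.array([np.dot(d[0:3], g), np.dot(d[1:4], g)])

print(np.allclose(y_winograd, y_direct))   # True
```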

Jun 20, 2016   |   Scott Clark

Much Deeper, Much Faster: Deep Neural Network Optimization with SigOpt and Nervana Cloud

By: Scott Clark, Ian Dewancker, and Sathish Nagappan. Tools like neon, Caffe, Theano, and TensorFlow make it easier than ever to build custom neural networks and to reproduce groundbreaking research. Recent advances in cloud-based platforms like the Nervana Cloud enable practitioners to seamlessly build, train, and deploy these powerful methods. Finding the best configurations of these deep nets and efficiently tuning their parameters, however, remains…

Read more

#Intel DL Cloud & Systems
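For readers new to hyperparameter tuning, the sketch below shows the generic propose–train–observe loop the post above builds on, using plain random search rather than the SigOpt API. `train_and_evaluate` is a hypothetical stand-in for training a network (for example, a neon model on Nervana Cloud) and returning a validation metric; a Bayesian optimizer such as SigOpt replaces the random sampling with model-based suggestions, but the loop has the same shape.

```python
# Generic shape of a hyperparameter-tuning loop: a random-search stand-in,
# NOT the SigOpt API or the authors' exact setup. train_and_evaluate() is a
# hypothetical placeholder for training a network with the given
# hyperparameters and returning a validation metric.
import math
import random

def sample_params():
    return {
        "learning_rate": 10 ** random.uniform(math.log10(1e-4), math.log10(1e-1)),
        "momentum": random.uniform(0.5, 0.99),
        "hidden_units": random.randint(64, 1024),
    }

def train_and_evaluate(params):
    # Placeholder: train a model with `params` and return validation accuracy.
    raise NotImplementedError

def tune(num_trials=20):
    best_params, best_score = None, float("-inf")
    for _ in range(num_trials):
        params = sample_params()          # a Bayesian optimizer would "suggest" here
        score = train_and_evaluate(params)
        if score > best_score:            # ...and "observe" the result here
            best_params, best_score = params, score
    return best_params, best_score
```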

May 25, 2016   |   Aravind Kalaiah

Transfer learning using neon

Introduction: In the last few years, plenty of deep neural net (DNN) models have been made available for a variety of applications such as classification, image recognition, and speech translation. Typically, each of these models is designed for a very specific purpose, but can be extended to novel use cases. For example, one can train…

Read more

#Model Zoo #Transfer Learning
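Below is a minimal sketch of the usual transfer-learning recipe expressed with neon-style layers: keep a pretrained trunk, attach a fresh output layer for the new task, and fine-tune. Everything here is illustrative; the trunk shapes, the weight file name, and the load_params call are assumptions, and exact constructors and signatures vary across neon versions and Model Zoo models.

```python
# Minimal transfer-learning sketch with neon-style layers. Trunk architecture,
# weight file, and the load_params call are hypothetical placeholders; exact
# APIs vary by neon version and Model Zoo model.
from neon.backends import gen_backend
from neon.initializers import Gaussian
from neon.layers import Affine, Conv, Pooling
from neon.models import Model
from neon.transforms import Rectlin, Softmax

be = gen_backend(backend='cpu', batch_size=32)
init = Gaussian(scale=0.01)

# Trunk: must match the architecture the pretrained weights were saved from.
trunk = [Conv((5, 5, 16), init=init, activation=Rectlin()),
         Pooling(2, strides=2),
         Conv((5, 5, 32), init=init, activation=Rectlin()),
         Pooling(2, strides=2)]

# New head for the target task, e.g. 5 classes instead of the original ones.
head = [Affine(nout=5, init=init, activation=Softmax())]

model = Model(layers=trunk + head)

# Hypothetical: restore pretrained trunk weights from a Model Zoo file, then
# fine-tune on the new dataset with the usual cost/optimizer/fit calls.
# model.load_params('pretrained_trunk.p', load_states=False)
```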

May 18, 2016   |   Carey Kloss

Nervana Engine delivers deep learning at ludicrous speed!

Nervana is currently developing the Nervana Engine, an application-specific integrated circuit (ASIC) custom-designed and optimized for deep learning. Training a deep neural network involves many compute-intensive operations, including matrix multiplication of tensors and convolution. Graphics processing units (GPUs) are better suited to these operations than CPUs, since GPUs were originally designed for video…

Read more

#Intel Nervana NNP
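As a generic illustration of the claim above (and unrelated to the Nervana Engine's actual design), a 2-D convolution can be lowered to a dense matrix multiply via im2col, which is one reason hardware built around large tensor operations maps well onto deep learning training.

```python
# Generic illustration (not the Nervana Engine's design): a 2-D convolution
# lowered to a single dense matrix multiply via "im2col".
import numpy as np

def conv2d_via_matmul(x, w):
    """x: (H, W) input, w: (R, S) filter; returns the 'valid' correlation."""
    H, W = x.shape
    R, S = w.shape
    out_h, out_w = H - R + 1, W - S + 1
    # im2col: each row holds one receptive field of the input, flattened.
    cols = np.empty((out_h * out_w, R * S))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + R, j:j + S].ravel()
    # The convolution is now a matrix product with the flattened filter.
    return (cols @ w.ravel()).reshape(out_h, out_w)

x = np.random.randn(6, 6)
w = np.random.randn(3, 3)
ref = np.array([[np.sum(x[i:i + 3, j:j + 3] * w) for j in range(4)] for i in range(4)])
print(np.allclose(conv2d_via_matmul(x, w), ref))   # True
```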