Blog

Jan 29, 2016   |   Stewart Hall

Faster Training in neon with Multiple GPUs on the Nervana Cloud

The Nervana Cloud provides unprecedented performance, ease of use, and the ability to apply deep learning to a large range of machine learning problems. With modern networks taking days, weeks or even months to train, performance is one of our fundamental goals. GPUs allow us to greatly improve performance by parallelizing convolution and matrix multiply…

Read more

#Intel DL Cloud & Systems #neon

Jan 19, 2016   |   Scott Leishman

neon v1.1.5 released!

Highlights from this release include:
* CUDA kernels for the LookupTable layer, yielding a 4x speedup on our sentiment analysis model example
* support for deterministic Conv layer updates
* custom dataset walkthrough utilizing bAbI data
* reduced number of threads in deep reduction EW kernels [#171]
* additional (de)serialization routines [#106]
* CPU…

Read more

#Release Notes


Dec 29, 2015   |   Tambet Matiisen

Guest Post (Part II): Deep Reinforcement Learning with Neon

This is part 2 of a blog series on deep reinforcement learning. See part 1 “Demystifying Deep Reinforcement Learning” for an introduction to the topic. The first time we read DeepMind’s paper “Playing Atari with Deep Reinforcement Learning” in our research group, we immediately knew that we wanted to replicate this incredible result. It was…

Read more

#neon #Reinforcement Learning

Dec 22, 2015   |   Tambet Matiisen

Guest Post (Part I): Demystifying Deep Reinforcement Learning

Two years ago, a small company in London called DeepMind uploaded their pioneering paper “Playing Atari with Deep Reinforcement Learning” to arXiv. In this paper they demonstrated how a computer learned to play Atari 2600 video games by observing just the screen pixels and receiving a reward when the game score increased. The result was remarkable,…

Read more

#neon #Reinforcement Learning

Dec 03, 2015   |   Scott Leishman

neon v1.1.3 released!

Highlights from this release include:
* deconvolution and weight histogram visualization examples and documentation
* CPU convolution and pooling layer speedups (~2x faster)
* bAbI question and answer interactive demo and dataset support
* various ImageLoader enhancements
* interactive usage improvements (shortcut Callback import, multiple Callbacks init, doc updates, single item batch size support)
* set…

Read more

#Release Notes

Nov 04, 2015   |   JD Co-Reyes

Intern Spotlight: Implementing Language Models

Recurrent Neural Networks: During my internship at Nervana Systems, I got to implement a few language models using Recurrent Neural Networks (RNNs) and achieved a significant speedup in training image captioning models. RNNs are good at learning relationships over sequences of data. For example, an RNN could be fed characters of Shakespearean text, learn…

Read more

#Model Zoo #RNNs

Oct 31, 2015   |   Scott Leishman

neon v1.1.0 released!

Highlights from this new release include:
* sentiment analysis support (LSTM LookupTable based), new IMDB example network
* support for merge and branch layer stacks via the introduction of LayerContainers
* support for freezing layer stacks
* Adagrad-based optimizer
* new GPU kernels for fast compounding batch norm, conv and pooling engine updates, new…

Read more

#Release Notes
