Blog

Jan 24, 2018   |   Peng Zhang, Wei Wang, Baojun Liu, Jayaram Bobba

neon™ 2.6.0: Inference Optimizations for Single Shot MultiBox Detector on Intel® Xeon® Processor Architectures

We are excited to release the neon™ 2.6.0 framework, which features improvements to the CPU inference path for a VGG-16-based Single Shot MultiBox Detector (SSD) neural network. These updates, together with the training optimizations released in neon 2.5.0, show that neon is delivering significant boosts in both training and inference performance. (Granular configuration details, as well…

Read more

#neon #Release Notes

Nov 14, 2017   |   Wei Wang, Peng Zhang, Jayaram Bobba

neon v2.3.0: Significant Performance Boost for Deep Speech 2 and VGG models

We are excited to announce the release of neon™ 2.3.0. It ships with significant performance improvements for Deep Speech 2 (DS2) and VGG models running on Intel® architecture (IA). For the DS2 model, our tests show up to a 6.8X improvement1,4 with the Intel® Math Kernel Library (Intel® MKL) backend over the NumPy CPU backend with…

Read more

#Release Notes
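The post above compares neon's Intel MKL backend against the NumPy CPU backend. As a loose illustration (not code from the post), here is how a neon script typically chooses between the two; the 'mkl' backend string and the batch size used here are assumptions based on neon 2.x conventions.

```python
# Minimal sketch: selecting neon's Intel MKL backend instead of the default
# NumPy CPU backend. Assumes neon >= 2.0, where the 'mkl' backend is available.
from neon.backends import gen_backend

# 'mkl' requests the MKL-accelerated CPU backend; 'cpu' is the NumPy fallback.
be = gen_backend(backend='mkl', batch_size=32)  # batch_size value is illustrative
print(type(be).__name__)                        # confirm which backend was built
```

Many neon example scripts expose the same choice on the command line (e.g. `-b mkl` versus `-b cpu`); treat the exact flag as an assumption if you are on a different neon version.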

[Figure: BDW-SKX normalized throughput]

Sep 18, 2017   |   Jayaram Bobba

neon v2.1.0: Leveraging Intel® Advanced Vector Extensions 512 (Intel® AVX-512)

We are excited to announce the availability of the neon™ 2.1 framework. An optimized backend based on the Intel® Math Kernel Library (Intel® MKL) is enabled by default on CPU platforms with this release. neon™ 2.1 also uses a newer version of Intel® MKL for Deep Neural Networks (Intel® MKL-DNN), which features optimizations for…

Read more

#Release Notes

Jun 28, 2017   |   Jayaram Bobba

neon™ 2.0: Optimized for Intel® Architectures

neon™ is a deep learning framework created by Nervana Systems with industry-leading performance on GPUs thanks to its custom assembly kernels and optimized algorithms. Since Nervana joined Intel, we have been working together to bring superior performance to CPU platforms as well. Today, as the result of a great collaboration between the teams, we…

Read more

#Release Notes

Jun 22, 2017   |   Jason Knight

Intel® Nervana™ Graph Beta

We are building the Intel Nervana Graph project to be the LLVM for deep learning, and today we are excited to announce a beta release of the work we previously presented in a technical preview. We see the Intel Nervana Graph project as the beginning of an ecosystem of optimization passes, hardware backends, and frontend…

Read more

#neon #nGraph

Jan 06, 2017   |   Yinyin Liu

Building Skip-Thought Vectors for Document Understanding

The idea of converting natural language processing (NLP) problems into vector space mathematics using deep learning models has been around since 2013. A word vector, from word2vec [1], uses a string of numbers to represent a word’s meaning as it relates to other words, or its context, through training. From a word vector,…

Read more

#Model Zoo #NLP
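The skip-thought post above starts from word vectors: words mapped to points in a vector space where geometric closeness tracks similarity of meaning. As a toy, framework-agnostic illustration (not code from the post, and with made-up vectors), a cosine-similarity check:

```python
# Toy illustration of the word-vector idea: similarity of meaning measured as
# cosine similarity between vectors. The 3-d vectors below are invented for the example.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {
    "king":  np.array([0.9, 0.7, 0.1]),
    "queen": np.array([0.8, 0.8, 0.1]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

print(cosine(vectors["king"], vectors["queen"]))  # high: related words
print(cosine(vectors["king"], vectors["apple"]))  # lower: unrelated words
```

Skip-thought vectors extend the same idea from words to whole sentences, training a sentence encoder to predict the neighboring sentences of each input sentence.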

Dec 29, 2016   |   Jennifer Myers

neon v1.8.0 released!

Highlights from this release include:
* Skip Thought Vectors example
* Dilated convolution support
* Nesterov Accelerated Gradient option to SGD optimizer
* MultiMetric class to allow wrapping Metric classes
* Support for serializing and deserializing encoder-decoder models
* Allow specifying the number of time steps to evaluate during beam search
* A new community-contributed Docker image…

Read more

#Release Notes
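One of the highlights above is a Nesterov Accelerated Gradient option for the SGD optimizer. As a framework-agnostic sketch of what that option computes (not neon's implementation; names and values are made up), compare the classical and Nesterov momentum updates:

```python
# Sketch of SGD with Nesterov momentum on a toy quadratic loss f(w) = 0.5 * ||w||^2.
import numpy as np

def grad(w):                 # gradient of the toy loss
    return w

w, v = np.array([5.0, -3.0]), np.zeros(2)
lr, mu = 0.1, 0.9

for _ in range(100):
    # Classical momentum would use:  v = mu * v - lr * grad(w)
    # Nesterov evaluates the gradient at the "look-ahead" point w + mu * v:
    v = mu * v - lr * grad(w + mu * v)
    w = w + v

print(w)  # converges toward the minimum at the origin
```

The look-ahead gradient is the only difference from classical momentum, and is what "Nesterov Accelerated Gradient option" refers to in the highlight above.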

Dec 08, 2016   |   Anthony Ndirango

End-to-end speech recognition with neon

By Anthony Ndirango and Tyler Lee. Speech is an intrinsically temporal signal. The information-bearing elements present in speech evolve over a multitude of timescales. Fine changes in air pressure, at rates of hundreds to thousands of hertz, convey information about the speakers and their location, and help us separate them from a noisy world. Slower changes in…

Read more

#Model Zoo #Speech Recognition

Nov 22, 2016   |   Jennifer Myers

neon v1.7.0 released!

Highlights from this release include an update of the data loader to aeon for flexible, multi-threaded data loading and transformations. More information can be found in the docs, but in brief, aeon:
* provides an easy interface to adapt existing models to your own custom datasets
* supports images, video, and audio
* is easy to extend with your own providers for custom…

Read more

#Release Notes

Oct 12, 2016   |   Sathish Nagappan

Accelerating Neural Networks with Binary Arithmetic

At Nervana we are deeply interested in algorithmic and hardware improvements for speeding up neural networks. One particularly exciting area of research is low-precision arithmetic. In this blog post, we highlight one particular class of low-precision networks, binarized neural networks (BNNs), explain the fundamental concepts underlying this class, and introduce a neon…

Read more

#neon
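The post above concerns binarized neural networks, in which weights and activations are constrained to +1/-1 so that multiplications can be replaced with cheap sign flips and bitwise operations. A minimal NumPy sketch of the core binarization idea (a generic illustration, not the post's implementation):

```python
# Minimal sketch of binarization, the core idea behind BNNs: constrain weights
# and activations to +1/-1 so the forward pass needs only sign flips and sums.
import numpy as np

def binarize(x):
    # deterministic sign binarization, with zeros mapped to +1
    return np.where(x >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # toy batch: 4 examples, 8 features
W = rng.standard_normal((8, 16))   # full-precision weights of a dense layer

y = binarize(x) @ binarize(W)      # binary forward pass: every product is +/-1
print(y.shape)                     # (4, 16); each entry is an even integer in [-8, 8]
```

In an actual BNN, full-precision copies of the weights are kept for the gradient updates, and a straight-through estimator is typically used to backpropagate through the non-differentiable sign function.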
