Highlights from this new release include:

* Sentiment analysis support (LSTM and LookupTable based), with a new IMDB example network
* Support for merge and branch layer stacks via the introduction of LayerContainers
* Support for freezing layer stacks
* New Adagrad-based optimizer (see the usage sketch after this list)
* New GPU kernels for fast compounding of batch norm, updates to the conv and pooling engines, and a new kernel build system and flags
* Modifications for Caffe support. Note that this may break backwards compatibility with previously serialized strided conv net models; see http://neon.nervanasys.com/docs/latest/faq.html for details
* Default training cost display during progress bar is now calculated on a rolling window basis rather than from the beginning of each epoch
* Separate layer configuration and initialization steps
* Callback enhancements and updates. Note that validation_frequency has been renamed to evaluation_frequency
* Miscellaneous bug fixes and documentation updates throughout.
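
To give a rough feel for how a few of these pieces fit together, here is a minimal sketch (not taken from the neon examples) that builds a small branching model from the new LayerContainers, trains it with the Adagrad-based optimizer, and evaluates it at the renamed evaluation frequency. The class and keyword names used for the data iterator and callbacks (ArrayIterator, eval_set, eval_freq) and the "stack" merge mode are assumptions that may differ slightly between neon versions; check the documentation for the authoritative API.

```python
import numpy as np

from neon.backends import gen_backend
from neon.callbacks.callbacks import Callbacks
from neon.data import ArrayIterator          # called DataIterator in some releases
from neon.initializers import Gaussian
from neon.layers import Affine, GeneralizedCost, MergeBroadcast
from neon.models import Model
from neon.optimizers import Adagrad
from neon.transforms import CrossEntropyMulti, Rectlin, Softmax

be = gen_backend(backend='cpu', batch_size=128)

# toy data standing in for a real dataset
X = np.random.rand(1024, 100).astype(np.float32)
y = np.random.randint(10, size=1024)
train_set = ArrayIterator(X, y, nclass=10)
valid_set = ArrayIterator(X[:256], y[:256], nclass=10)

init = Gaussian(scale=0.01)

# two parallel branches joined back together via a merge LayerContainer
branch1 = [Affine(nout=64, init=init, activation=Rectlin())]
branch2 = [Affine(nout=64, init=init, activation=Rectlin())]
layers = [MergeBroadcast(layers=[branch1, branch2], merge="stack"),
          Affine(nout=10, init=init, activation=Softmax())]

model = Model(layers=layers)
opt = Adagrad(learning_rate=0.01)                    # the new Adagrad-based optimizer
cost = GeneralizedCost(costfunc=CrossEntropyMulti())

# evaluation frequency (formerly "validation_frequency") controls how often
# the validation metric callbacks fire, in epochs
callbacks = Callbacks(model, eval_set=valid_set, eval_freq=1)

model.fit(train_set, optimizer=opt, cost=cost, num_epochs=5, callbacks=callbacks)
```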

As always, you can grab this release from GitHub at https://github.com/NervanaSystems/neon


Scott Leishman
Algorithms Engineer
