neon v1.1.3 released!
Dec 03, 2015
Highlights from this release include:
* deconvolution and weight histogram visualization examples and documentation
* CPU convolution and pooling layer speedups (~2x faster)
* bAbI dataset support and an interactive question-and-answer demo
* various ImageLoader enhancements
* interactive usage improvements (shortcut Callback import, multiple Callbacks init, doc updates, single-item batch size support)
* default verbosity level set to warning
* CIFAR10 example normalization updates
* CUDA detection enhancements [#132]
* batch_writer arguments now parsed only when run as a script; undefined global_mean allowed [#137, #140]
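To illustrate the verbosity change: neon emits its messages through Python's standard logging module, so the new warning-level default can be overridden the usual way. A minimal sketch (the "neon" logger name is an assumption; adjust for your setup):

```python
import logging

# With this release, output defaults to warning level.
# Configure the root logger at the new default...
logging.basicConfig(level=logging.WARNING)

# ...then raise verbosity for the package logger if you still
# want informational messages during training runs.
logger = logging.getLogger("neon")
logger.setLevel(logging.INFO)

logger.info("info messages are visible again")
```

The same pattern works in reverse to silence a chatty run: set the package logger to `logging.ERROR`.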
As always, you can grab this release from GitHub at: https://github.com/NervanaSystems/neon
Fig. 1: Deconvolution visualization example
Fig. 2: Weight histogram visualization example