Highlights from this release include: 

  • Update the data loader to aeon for flexible, multi-threaded data loading and transformations. More information can be found in the docs, but in brief, aeon:
    • provides an easy interface for adapting existing models to your own custom datasets
    • supports images, video, and audio, and is easy to extend with your own providers for custom modalities
    • is designed to handle large datasets efficiently, loading and augmenting data with minimal latency
  • Add Neural Machine Translation model
  • Remove Fast R-CNN model (use the Faster R-CNN model instead)
  • Fix super blocking for small N with 1D conv
  • Fix update-direct conv kernel for small N
  • Add gradient clipping to Adam optimizer
  • Documentation updates and bug fixes
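The gradient-clipping addition to Adam simply bounds each incoming gradient before the moment updates. The following is a minimal standalone sketch of the technique in plain NumPy, not neon's actual implementation; the `clip_value` argument name is an assumption for illustration:

```python
import numpy as np

def adam_step_with_clipping(param, grad, m, v, t,
                            lr=0.001, beta1=0.9, beta2=0.999,
                            eps=1e-8, clip_value=5.0):
    """One Adam update with element-wise gradient clipping applied first.

    Hypothetical standalone sketch of the technique; not neon's code.
    """
    # Clip each gradient element to [-clip_value, clip_value] before the
    # moment updates, so outlier gradients cannot destabilize training.
    grad = np.clip(grad, -clip_value, clip_value)

    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

With a clip value of 5.0, a spurious gradient of 100.0 contributes no more than a gradient of 5.0 would, keeping the effective step size near the base learning rate even on outlier batches.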

As always, you can grab this release from GitHub at: https://github.com/NervanaSystems/neon

neon v1.7.0 release

Related Blog Posts

neon v2.3.0: Significant Performance Boost for Deep Speech 2 and VGG models

We are excited to announce the release of neon™ 2.3.0. It ships with significant performance improvements for Deep Speech 2 (DS2) and VGG models running on Intel® architecture (IA). For the DS2 model, our tests show up to a 6.8X improvement with the Intel® Math Kernel Library (Intel® MKL) backend over the NumPy CPU backend with…

[Chart: BDW-SKX normalized throughput]

neon v2.1.0: Leveraging Intel® Advanced Vector Extensions 512 (Intel® AVX-512)

We are excited to announce the availability of the neon™ 2.1 framework. An optimized backend based on the Intel® Math Kernel Library (Intel® MKL) is enabled by default on CPU platforms with this release. neon™ 2.1 also uses a newer version of the Intel® MKL for Deep Neural Networks (Intel® MKL-DNN), which features optimizations for…


neon™ 2.0: Optimized for Intel® Architectures

neon™ is a deep learning framework created by Nervana Systems with industry-leading performance on GPUs, thanks to its custom assembly kernels and optimized algorithms. Since Nervana joined Intel, we have been working together to bring superior performance to CPU platforms as well. Today, as the result of a great collaboration between the teams, we…
