neon v1.6.0 release notes

Highlights from this release include: 

  • Faster R-CNN model
  • Sequence to Sequence container and char_rae recurrent autoencoder model
  • Reshape Layer that reshapes the input [#221]
  • Pip requirements in requirements.txt updated to latest versions [#289]
  • Remove deprecated data loaders and update docs
  • Use the NEON_DATA_CACHE_DIR environment variable as the archive directory for storing DataLoader-ingested data (see the sketch after this list)
  • Eliminate type conversion for FP16 for CUDA compute capability >= 5.2
  • Use GEMV kernels for batch size 1
  • Alter delta buffers for nesting of merge-broadcast layers
  • Support for ncloud real-time logging
  • Add fast_style Makefile target
  • Fix Python 3 builds on Ubuntu 16.04
  • Run setup.py for sysinstall to generate version.py [#282]
  • Fix broken link in mnist docs
  • Fix conv/deconv tests for CPU execution and fix i32 data type
  • Fix for average pooling with batch size 1
  • Change default scale_min to allow random cropping if omitted
  • Fix yaml loading
  • Fix bug with image resize during ingest
  • Update references to the ModelZoo and neon examples to their new locations

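As a quick illustration of the cache-directory highlight above, the sketch below points the DataLoader ingest cache at a custom location by exporting NEON_DATA_CACHE_DIR before any loader is constructed; the path shown is only an example.

    import os

    # Illustrative sketch: NEON_DATA_CACHE_DIR tells neon where to archive
    # DataLoader-ingested data. Set it before constructing any loader; the
    # path below is an arbitrary example, not a required location.
    os.environ["NEON_DATA_CACHE_DIR"] = "/data/neon_cache"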

As always, you can grab this release from GitHub at: https://github.com/NervanaSystems/neon

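Once the release is installed, a quick sanity check is to read back the version string that the generated version.py provides (see the setup.py highlight above). The snippet below is a minimal sketch and assumes the value is exposed as neon.__version__.

    import neon

    # Sketch: version.py is generated at install time (setup.py sysinstall);
    # the __version__ attribute is assumed here to carry the release string.
    print(neon.__version__)  # expected to report 1.6.0 for this release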

Jennifer Myers

Senior Director Deep Learning Frameworks, Artificial Intelligence Products Group
