2017: What a Wonderful Year for AI

Dec 21, 2017

As 2017 draws to a close, we’d like to reflect on a momentous year for the Intel AI team. Our artificial intelligence strategy is to help ensure that every data scientist, developer, and practitioner has access to the best platform and the easiest starting point for the problem at hand. This year, we made significant progress toward this vision across Intel, and it’s clear the community is recognizing our advances. We would like to share some of the top highlights with you:

Intel® Nervana™ Neural Network Processor

In October, we announced the world’s first neural network silicon designed for broad enterprise deployment – the Intel® Nervana™ Neural Network Processor (NNP) family. The Intel Nervana NNP’s innovative architecture optimizes memory and interconnects to provide more computation capability and better model scalability. It utilizes a new technology called Flexpoint, which maximizes the precision that can be stored within 16 bits, enabling the perfect combination of high memory bandwidth and algorithmic performance for end-user AI applications.
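To give a feel for the general idea behind formats like Flexpoint, here is a small illustrative sketch of storing a tensor as fixed-point mantissas that share a single exponent. Flexpoint itself is a more sophisticated format with managed exponent updates during training; the function names below are ours, not part of any Intel product.

```python
import numpy as np

def to_shared_exponent(x, mantissa_bits=16):
    """Illustrative only: quantize a tensor with one shared exponent,
    in the spirit of block floating point formats such as Flexpoint."""
    # Pick a single exponent so the largest value still fits in a
    # signed integer mantissa of the given width.
    max_int = 2 ** (mantissa_bits - 1) - 1
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros_like(x, dtype=np.int16), 0
    exponent = int(np.ceil(np.log2(max_abs / max_int)))
    mantissas = np.clip(np.round(x / 2.0 ** exponent), -max_int, max_int)
    return mantissas.astype(np.int16), exponent

def from_shared_exponent(mantissas, exponent):
    """Reconstruct approximate float32 values from the shared-exponent form."""
    return mantissas.astype(np.float32) * 2.0 ** exponent

# Example: activations stored as 16-bit mantissas plus one shared exponent.
acts = np.random.randn(4, 8).astype(np.float32)
m, e = to_shared_exponent(acts)
approx = from_shared_exponent(m, e)
print("max abs error:", np.max(np.abs(acts - approx)))
```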

Intel® Stratix® 10 FPGAs + Microsoft “Project Brainwave”

Microsoft demonstrated its FPGA-based deep learning platform, code-named “Project Brainwave,” at the Hot Chips conference in August. Intel Stratix 10 FPGAs enable the acceleration of deep neural networks (DNNs) and are a key hardware accelerator in Microsoft’s new accelerated deep learning platform.

Intel® Xeon® Scalable Processors

This year, we introduced our most versatile data center processor ever, suited for a range of AI, big data, and analytics workloads: the Intel® Xeon® Scalable Platform. Intel Xeon processors already support a majority of AI workloads today[1], and with more cores and memory bandwidth, optional integrated fabric controllers, and 512-bit vector support, the new Intel Xeon Scalable processors deliver major advances in performance, scalability, and flexibility. They are ideal for deploying inference at scale in applications ranging from predictive medicine to image processing and analytics, and they also meet the demands of deep learning training.

Deep Learning and Training Tools

We’re very proud of our work with BigDL, a distributed deep learning library for Apache Spark* that we designed and optimized for Intel® Xeon® processors. Today, BigDL is deployed across six major cloud service providers and multiple enterprises, impacting hundreds of millions of end users. Some notable deployments of BigDL are in AWS EMR, Azure HDInsight, Microsoft DSVM, Cray Urika-XC Analytics, Databricks, Qubole, Cloudera Data Science Workbench, GigaSpaces’ InsightEdge, BlueData EPIC, and more. This fall, we also announced the release of our Reinforcement Learning Coach, an open-source research framework for training and evaluating RL agents on a desktop computer, with no additional hardware required. Since then, we have been working hard to push Coach into new frontiers that will improve its usability for real-world applications, and we just released a new version this week.
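For readers curious what working with BigDL looks like, here is a rough sketch of defining and training a small model with its Python API, roughly as it stood around the 0.3 release. The exact module paths and the Optimizer signature are recalled from memory and may differ between versions, and `train_rdd` is assumed to be an existing Spark RDD of BigDL `Sample` records prepared elsewhere.

```python
from bigdl.util.common import init_engine
from bigdl.nn.layer import Sequential, Linear, ReLU, LogSoftMax
from bigdl.nn.criterion import ClassNLLCriterion
from bigdl.optim.optimizer import Optimizer, SGD, MaxEpoch

init_engine()  # initialize BigDL on the existing SparkContext

# A small multilayer perceptron built from BigDL layer primitives.
model = Sequential()
model.add(Linear(784, 128)).add(ReLU())
model.add(Linear(128, 10)).add(LogSoftMax())

# train_rdd is assumed to be an RDD of BigDL Sample records (features
# plus label) distributed across the Spark cluster.
optimizer = Optimizer(
    model=model,
    training_rdd=train_rdd,
    criterion=ClassNLLCriterion(),
    optim_method=SGD(learningrate=0.01),
    end_trigger=MaxEpoch(5),
    batch_size=256)
trained_model = optimizer.optimize()
```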

Intel AI Academy

Launched in late 2016 to give developers, data scientists, students, and professors the resources they need to reach their AI development goals, the Intel AI Academy today has more than 100,000 members and 232,000 users visiting the site each month. With our series of online courses, webinars, live events, and meetups, plus free access to the Intel® AI DevCloud, the Intel AI Academy has trained more than 55,000 developers and students on AI technical topics. We are currently working with 333 universities on the Intel AI Academy for students program. There are over 150 Student Ambassadors enrolled worldwide, and they are already producing research on topics as varied as using deep learning to understand epileptic seizures, addressing climate change issues in Kenya, and helping people identify the most flattering hairstyle for their features.

Intel® Movidius™ Neural Compute Stick

At the CVPR conference in July, we launched the Intel® Movidius™ Neural Compute Stick, the world’s first USB-based deep learning inference kit and AI accelerator. Five months later, many thousands of developers worldwide are developing deep neural networks on our flagship AI devkit and creating impactful projects like this skin cancer detection system.
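As a rough illustration of what inference on the stick involves, the sketch below uses the first-generation NCSDK Python API (`mvnc`). It assumes a network has already been compiled into a `graph` file with the NCSDK toolchain (mvNCCompile) from a trained Caffe or TensorFlow model, and the input image here is a placeholder; method names follow the v1 API, which was later superseded.

```python
import numpy as np
from mvnc import mvncapi as mvnc  # NCSDK v1 Python API

# Find and open the first attached Neural Compute Stick.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# 'graph' is assumed to be a network compiled offline with mvNCCompile.
with open('graph', 'rb') as f:
    graph_blob = f.read()
graph = device.AllocateGraph(graph_blob)

# Run a single inference on a preprocessed image (the stick uses FP16).
image = np.random.rand(224, 224, 3).astype(np.float16)  # placeholder input
graph.LoadTensor(image, 'user object')
output, _ = graph.GetResult()
print('top class:', int(np.argmax(output)))

graph.DeallocateGraph()
device.CloseDevice()
```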

AI at the Edge

We also made advancements in enabling other AI edge deployments. In the past month, Google announced its new AIY Vision Kit featuring the Intel® Movidius™ Vision Processing Unit (VPU), and Amazon announced its new AWS DeepLens video camera developer platform, powered by the Intel Atom® processor with integrated Intel® Graphics. DeepLens, introduced at the AWS re:Invent conference, is a fantastic combination of our hardware and optimized software.

Software

While we’re on the topic of software, this year we made some amazing strides in optimizing frameworks and developing tools that make AI implementations easier, as well as in launching new software products powered by associative memory AI. We continue to develop neon™ as Intel’s reference deep learning framework, and last month we released neon v2.3.0, which provides significant performance improvements for Deep Speech 2 (DS2) and VGG models. We’ve also open sourced our Intel® nGraph™ project, designed to be a compiler infrastructure for deep learning. Intel nGraph is at the heart of an ecosystem of optimizations, hardware backends, and front-end connectors to DL frameworks. The neon frontend to nGraph enables common deep learning primitives, such as activation functions, optimizers, layers, and more.
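To show what those primitives look like in practice, here is a minimal sketch of defining a small model with neon’s layer API, roughly as of the 2.x releases. The layer sizes and hyperparameters are arbitrary, and `train_set` is assumed to be a neon data iterator prepared elsewhere.

```python
from neon.backends import gen_backend
from neon.initializers import Gaussian
from neon.layers import Affine, GeneralizedCost
from neon.transforms import Rectlin, Softmax, CrossEntropyMulti
from neon.models import Model
from neon.optimizers import GradientDescentMomentum

# Select the CPU backend (MKL-accelerated builds were also available).
be = gen_backend(backend='cpu', batch_size=128)

# A small fully connected network built from neon layer primitives.
init = Gaussian(scale=0.01)
layers = [Affine(nout=100, init=init, activation=Rectlin()),
          Affine(nout=10, init=init, activation=Softmax())]
mlp = Model(layers=layers)

cost = GeneralizedCost(costfunc=CrossEntropyMulti())
opt = GradientDescentMomentum(learning_rate=0.1, momentum_coef=0.9)

# train_set is assumed to be a neon data iterator (e.g. ArrayIterator)
# built from your dataset; training would then look like:
# mlp.fit(train_set, optimizer=opt, num_epochs=10, cost=cost, callbacks=callbacks)
```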

Our software engineers have also been hard at work optimizing popular frameworks like TensorFlow*, Caffe*, Theano*, and MXNet* for Intel® Architecture-based platforms, and have seen some dramatic performance improvements, such as up to a 2.1x speedup on deep learning training using system-level optimizations.[2]
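The kinds of system-level knobs these optimizations rely on can be illustrated with TensorFlow 1.x session settings plus OpenMP affinity variables. The specific values below are placeholders to tune per platform, not the configuration used in the cited measurement.

```python
import os
import tensorflow as tf  # TensorFlow 1.x API

# Pin the OpenMP threads used by the Intel MKL kernels to physical cores.
# Values are placeholders; tune them for your core count and topology.
os.environ['OMP_NUM_THREADS'] = '24'
os.environ['KMP_AFFINITY'] = 'granularity=fine,compact,1,0'
os.environ['KMP_BLOCKTIME'] = '1'

# Match TensorFlow's thread pools to the core/socket layout.
config = tf.ConfigProto(
    intra_op_parallelism_threads=24,  # threads within a single op
    inter_op_parallelism_threads=2)   # ops that may run in parallel

with tf.Session(config=config) as sess:
    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    c = tf.matmul(a, b)   # a representative compute-bound op
    sess.run(c)
```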

Intel AI Lab

There are many ways to solve data problems, and we are constantly looking at new and innovative solutions. As part of our research efforts, the Intel AI Lab, a team of machine learning and deep learning researchers and engineers, data scientists, and neuroscientists, was formed earlier this year, bringing together AI teams from across Intel. The Intel AI Lab collaborates with academic institutions and corporations and has been using our AI hardware and software to solve a number of challenges, from optimizing trading for financial services companies, to speeding up genomics analysis, to improving the chip yields of our own manufacturing processes.

Solving Real-World (and Out-of-this-World) Problems

At Intel, we strongly believe that AI will have an incredibly positive impact on our society, similar to earlier transformations such as the industrial and PC revolutions. There are opportunities to fundamentally change people’s lives with AI. A few examples: Intel AI technology is helping farmers optimize crop production and feed the world, working with the National Center for Missing and Exploited Children to identify and rescue victims of abuse, and assisting marine biologists with analyzing the health of the world’s oceans. In October, we also introduced the Intel® Saffron™ Anti-Money Laundering (AML) Advisor, an associative memory solution, optimized for Intel Xeon Scalable processors, that helps the financial services industry combat financial crime.

Looking beyond our planet, we’ve been collaborating with NASA’s Frontier Development Lab to help solve space exploration challenges. For example, we developed a CNN-based algorithm to assist with lunar crater detection and labeling – and potentially determine the location of water on the moon.

Back on Earth, we are deeply involved in the AI ecosystem. In the past year we joined the Partnership on AI, a non-profit organization with the mission of advancing public understanding of AI and developing and sharing best practices among researchers. We also joined the Open Neural Network Exchange (ONNX), an open format for deep learning interoperability. We strive to provide input to thought leaders on the future of AI, and last month we spent time in Washington, D.C., meeting with policymakers to discuss the current reality of artificial intelligence, address concerns, and describe how public policy can enable further technology advancement.

 

If you’d like to be a part of the exciting work happening at Intel, come join us! And on behalf of the entire Intel AI team, have a Happy Festivus and a joyous New Year!

 

[1] Based on Intel internal estimates

[2] TensorFlow 1.4 Training Performance (Projected TTT) Improvement with optimized affinity for cores and memory locality using 4 Workers/Node compared to current baseline with 1 Worker/Node on 2S Intel® Xeon® Platinum 8168 CPU @ 2.70GHz (24 cores), HT enabled, turbo disabled, scaling governor set to “performance” via intel_pstate driver, 192GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series. Multiple nodes connected with 10Gbit Ethernet. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance. Source: Intel measured as of December 2017.
