Breaking Barriers between Model and Reality

See Intel AI in action at the O’Reilly AI Conference in San Francisco

The AI Conference starts today! Attendees are promised a wide variety of real-world use cases from companies around the globe that are putting AI to work. As co-sponsor of the event, Intel AI will host technical sessions and demos showcasing customer deployments and solutions on a variety of Intel AI hardware and software built to break through memory and power bottlenecks, from real-time object detection on drones to classifying dense medical images. Attendees will learn how Intel AI turns theory into reality.

Keynote: Beyond Hype – AI in the Real World

Julie Choi, Intel’s Head of AI Marketing, reviews real-world customer use cases that take AI from theory to reality. From accelerating drug discovery with deep learning to changing the way visual effects are created with machine learning, Intel AI is working side-by-side with a diverse range of organizations to accelerate their AI transformation.

Speaker: Julie Choi, Intel
Date: September 6
Time: 8:50 – 9:05 a.m.
Location: Continental Ballroom 4-6

Keynote: Accelerating AI on Intel® Xeon® Processors through Software Optimization

Huma Abidi, Engineering Director for the Intel AI Products Group, will discuss the importance of optimization to deep learning frameworks. As AI evolves, it is essential to have a full-stack solution where software optimizations take advantage of hardware innovations to accelerate AI applications. Partnering with framework developers is a critical component of Intel’s AI strategy to take machine and deep learning models from theory to reality. This talk will include Intel Xeon processor performance results and work Intel is doing with frameworks like TensorFlow.

Speaker: Huma Abidi, Intel
Date: September 7
Time: 9:10 – 9:20 a.m.
Location: Continental Ballroom 4-6


Session: Neural Network Distiller: A PyTorch Environment for Neural Network Compression

Deep learning applications employ deep neural networks (DNNs), which are notoriously time, compute, energy, and memory intensive.

Intel’s AI Lab has recently open-sourced Neural Network Distiller, a Python package for neural network compression research. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic. Intel AI is exploring how DNN compression can be another catalyst that brings deep learning innovation to more industries and application domains, making our lives easier, healthier, and more productive.

Neta Zmora, a Deep Learning Research Engineer in the AI Products Group, discusses the motivation for compressing DNNs, outlines compression approaches, and explores Distiller’s design and tools, supported algorithms, and code and documentation. Neta concludes with an example implementation of a compression research paper.
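
For readers curious what one of those sparsity-inducing methods looks like in practice, here is a minimal pure-Python sketch of magnitude-based weight pruning. This is illustrative only and is not Distiller's actual API; Distiller operates on PyTorch tensors and layers, not plain lists.

```python
# Magnitude-based pruning: zero out the smallest-magnitude weights
# until a target sparsity (fraction of zeros) is reached. A toy
# illustration of one compression technique Distiller supports;
# not Distiller's real interface.

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest |w|."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold is the magnitude of the n_prune-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.03]
pruned = magnitude_prune(weights, sparsity=0.5)
# The three smallest-magnitude weights are now exactly zero:
# [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Zeroed weights can then be stored sparsely or skipped at inference time, which is where the time, compute, energy, and memory savings come from.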

Speaker: Neta Zmora, Intel
Date: September 6
Time: 11:05 – 11:45 a.m.
Location: Continental 7-9


Session: Trends in AI Systems

One of the biggest challenges in AI is how to translate advances in the lab into large-scale applications. This challenge sits at the intersection of AI and systems engineering and requires an integrated understanding of all of the components that make up a large machine learning-based system, including computation, storage, communications, and algorithms. Data scientist Casimir Wierzynski reviews current trends in the field and shares case studies to illustrate why codesigning these components in concert will be critical for building the AI systems of the future.

Speaker: Casimir Wierzynski, Intel
Date: September 6
Time: 11:55 a.m. – 12:35 p.m.
Location: Continental 7-9


Session: Accelerating Deep Learning Inference using OpenVINO™ across Intel Platforms

The OpenVINO™ toolkit is free software that helps computer vision teams speed the development and deployment of neural network applications on devices and gateways across multiple Intel® platforms (CPU, GPU, FPGA, VPU). In this session, Dmitry Rizshkov, a Machine Learning Engineer in the AI Products Group, will introduce OpenVINO through real customer case studies of challenging inference applications.

Speaker: Dmitry Rizshkov, Intel
Date: September 6
Time: 4:30 – 5:30 p.m.
Location: Franciscan BCD


Session: Efficient Neural Network Training on Intel® Xeon® Processor-based Supercomputers

Vikram Saletore, a Principal Engineer and Performance Architect in the AI Products Group, and Luke Wilson, a Data Scientist and Artificial Intelligence Researcher in Dell EMC’s HPC and AI Engineering Group, discuss a collaboration between SURFsara and Intel, part of the Intel Parallel Computing Center initiative, to advance the state of large-scale neural network training on Intel Xeon CPU-based servers. SURFsara and Intel evaluated a number of data-parallel and model-parallel approaches, as well as synchronous versus asynchronous SGD methods, with popular neural networks such as ResNet-50 on large datasets using the TACC (Texas Advanced Computing Center) and Dell HPC supercomputers.

Vikram and Luke share insights on several best-known methods, including CPU core and memory pinning and hyperparameter tuning, developed to demonstrate state-of-the-art top-1/top-5 accuracy at scale. They then detail real-world problems that can be solved with models efficiently trained at large scale, presenting tests performed at Dell EMC on CheXNet, a Stanford University project that extends a DenseNet model pre-trained on the large-scale ImageNet dataset to detect pathologies, including pneumonia, in chest X-ray images. Vikram and Luke highlight the improved time-to-solution from extended training of this pre-trained model and the storage and interconnect options that lead to more efficient scaling.
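
As background on the synchronous, data-parallel SGD approach evaluated in this work, here is a toy sketch: each worker computes gradients on its own data shard, the gradients are averaged across workers (an allreduce), and every worker applies the same update. All names here are illustrative and come from no Intel, SURFsara, or Dell EMC codebase.

```python
# Toy synchronous data-parallel SGD step. In a real system the
# averaging is an MPI/collective allreduce over many nodes; here
# it is a plain elementwise mean over per-worker gradient lists.

def allreduce_mean(grads_per_worker):
    """Average per-worker gradient vectors elementwise."""
    n = len(grads_per_worker)
    return [sum(g[i] for g in grads_per_worker) / n
            for i in range(len(grads_per_worker[0]))]

def sgd_step(params, grads_per_worker, lr=0.1):
    """Apply one SGD update using the worker-averaged gradient."""
    avg = allreduce_mean(grads_per_worker)
    return [p - lr * g for p, g in zip(params, avg)]

params = [1.0, -2.0]
# Gradients computed by two workers on different data shards.
grads = [[0.2, 0.4], [0.6, 0.0]]
params = sgd_step(params, grads)
# Averaged gradient is [0.4, 0.2], so params become roughly [0.96, -2.02].
```

Because every worker applies the identical averaged update, the model stays in sync across nodes; the scaling challenge is keeping that allreduce fast as the node count grows.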

Speakers: Vikram Saletore, Intel and Lucas Wilson, Dell EMC
Date: September 7
Time: 11:05 – 11:45 a.m.
Location: Continental 7-9


Session: Portability and Performance in Embedded Deep Learning: Can We Have Both?

Recently, a lot of work has been done on low-precision inference, demonstrating that by training for quantization, large gains in energy efficiency can be achieved. At the same time, embedded runtime packages like TensorFlow Lite and Caffe2Go have emerged that offer portability across a number of platforms. Cormac Brick, Director of Machine Intelligence in the Movidius Group, looks at the trade-off this choice presents and asks, “Why can’t we have both?” Cormac quantifies how big this gap truly is, using state-of-the-art methods for both approaches and specifically trained networks to show performance over a range of popular vision applications. He then covers best-in-class design techniques for developing portable networks that maximize performance on a variety of architectures and shares the industry challenges and progress needed to close the portability-performance gap.
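
To make the low-precision side of this trade-off concrete, here is a minimal sketch of symmetric 8-bit post-training quantization. It is illustrative only: production toolchains calibrate per-channel, handle zero points, and fuse operations, none of which this toy version does.

```python
# Symmetric 8-bit quantization: map floats onto signed int8 values
# in [-127, 127] using a single scale derived from the largest
# magnitude, then map back. The round trip is lossy but each value
# now fits in one byte -- the source of the energy-efficiency wins.

def quantize_int8(values):
    """Quantize floats to int8 codes plus a shared scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [x * scale for x in q]

vals = [0.5, -1.27, 0.02]
q, scale = quantize_int8(vals)     # q == [50, -127, 2]
approx = dequantize(q, scale)
# `approx` is close to `vals`, at a quarter of FP32's storage cost.
```

The portability question the session raises is that this quantized arithmetic must be supported efficiently by every target runtime and architecture, which is exactly where portable FP32 runtimes and hardware-tuned low-precision paths diverge.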

Speaker: Cormac Brick, Intel
Date: September 7
Time: 11:55 a.m. – 12:35 p.m. 
Location: Continental 7-9


Visit Intel in Booth 101 

See real-world use cases that highlight a variety of Intel AI hardware built for different application needs. We’re also demonstrating technologies from Intel® AI Builders partners, who share a vision to accelerate deployment of AI on Intel® architecture. Here’s a preview of the demos you can expect to see:

  • Real Time Object Detection at the Edge: Intel Movidius Neural Compute Stick performs real-time object detection using SSD MobileNet. In addition, there will be a handwriting recognition demo and a face/age detection application.
  • Intel® AI Academy: The Intel AI Academy offers education and collaboration opportunities for developers, data scientists and academics.
  • Lenovo: Manufacturing Quality Control Made Easier – The Product Quality Evaluation demo uses AI to evaluate the quality of a unit of product for an enterprise, demonstrated with a soda/pop can moving along a conveyor belt. The computer vision-based AI model detects defects in the can’s surface or in the coloring of its label.
  • Wipro: Segmenting, Classifying and Labeling Medical Images – The chest X-ray is one of the most frequent and cost-effective medical imaging examinations, yet clinical diagnosis from a chest X-ray can be challenging, sometimes more so than diagnosis via chest CT imaging. This demo uses machine learning to tackle two problems in the healthcare sector: medical image/X-ray segmentation, and medical image classification with automated label generation.
  • Domino Data Lab: Accelerating the Data Science Lifecycle in the Cloud – Domino shows how to accelerate the entire data science lifecycle, from exploratory analysis to managing production models: easily scale hardware and manage software environments, track work and monitor resources, and automate model deployment and monitoring.
  • Vision Ingenii: Combating the Rising Hazard, Detecting Fires and Monitoring Fire Patterns:
    • Fire Detection – identifying fire patterns and raising the corresponding level of severity
    • Pedestrian-Face Detection – identifying pedestrians and localizing face on the detected pedestrian
    • Specific or defined object detection – detecting object(s) of interest set by the user and mining information against the detected object for the user to take necessary actions
  • PanaCast Intelligent Vision System: Immersive Video Experience with People and Object Detection


Other AI Conference Activities

Join us for the AI at Night party, where you can network with other attendees while enjoying happy hour food and beverages and listening to a live DJ. The party happens September 6 from 6:45 to 9:30 p.m. (open to all conference attendees; bring your badge for admission).


Follow @IntelAI for More

We hope to see you at the conference! Make sure to follow @IntelAI for all of the news and info from The AI Conference, and tag us using #intelai and #theAIconf with Tweets from the show!