From hardware that excels at training on massive, unstructured data sets to extremely low-power silicon for on-device inference, Intel AI supports cloud service providers, enterprises, and research teams with a portfolio of multi-purpose, purpose-built, customizable, and application-specific hardware that turns models into reality.
Intel® Xeon® Scalable Processors

Intel® Xeon® Scalable processors are the first generation of our platform built specifically to run high-performance AI workloads—alongside the cloud and HPC workloads they already run.

Intel® FPGAs

Intel® field programmable gate arrays (FPGAs) are blank, modifiable canvases. Their purpose and power can be easily adapted again and again for any number of workloads and a wide range of structured and unstructured data types.

Intel® Movidius™ Vision Processing Units (VPUs)

The Intel® Movidius™ Myriad™ VPUs offer industry-leading performance per watt for demanding AI inference workloads on edge devices. These systems-on-chip (SoCs) are designed specifically for on-device advanced computer vision and neural network applications.

Intel® Nervana™ NNP

The Intel® Nervana™ Neural Network Processor (NNP) is a purpose-built architecture, designed from the ground up to power deep learning while making core hardware components as efficient as possible.

Intel® Neural Compute Stick 2

The new, improved Intel® Neural Compute Stick 2 (Intel® NCS 2) features Intel's latest high-performance vision processing unit, the Intel® Movidius™ Myriad™ X VPU. With more compute cores and a dedicated hardware accelerator for deep neural network inference, the Intel® NCS 2 delivers a significant performance boost over the previous-generation Intel® Movidius™ Neural Compute Stick.