For those who attended the sold-out Artificial Intelligence Conference presented by Intel and O’Reilly Media earlier this month in San Francisco, it was clear that we’ve reached a tipping point with AI. The technology—once the province of specialized hardware and lab environments—is going mainstream. AI is quickly coming of age, and that evolution is at the heart of our AI vision: full-stack solutions of hardware and software, deployed everywhere from devices all the way to the cloud.
Intel and O’Reilly Media have partnered on a series of Artificial Intelligence conferences that showcase real-world AI deployments across nearly every industry. In San Francisco, we saw examples ranging from robotics to computer vision and speech, and extending to edge and device computing.
In her keynote talk, Julie Choi covered a range of AI use cases enabled by Intel, including how Ziva Software used Intel AI to bring a giant shark to the big screen in The Meg. (See it; best shark movie since Sharknado!) Using machine learning algorithms and Intel Xeon Scalable processors, Ziva cut the time and cost for production companies to create virtual characters with lifelike appearances and movements from months to days.
Julie was joined on stage by Ariel Pisetzky of Taboola, a leading content discovery platform. Taboola runs a large-scale recommendation engine built on TensorFlow* that spans seven data centers. Ariel explained how Taboola needed to speed up inferencing, targeting at least a 30% improvement. After evaluating the available CPU and GPU options, Taboola chose Intel Xeon Scalable processors and saw a 2.5x overall performance improvement in its data center production environment using Intel-optimized TensorFlow and the Intel® Math Kernel Library, compared to baseline TensorFlow.1 And because Taboola uses Intel® architecture throughout, it can run data center web-serving applications alongside inferencing applications, reducing costs and streamlining operations.
As AI is becoming a standard element of the applications businesses run day to day, the hardware to run it must be a standard element of the data center infrastructure that IT organizations operate day to day. That’s what we’re delivering with Intel® Xeon® Scalable processors—a family of processors optimized to run high-performance AI applications alongside the data center workloads they already run. By working with our partners, we’ve optimized versions of TensorFlow and other popular deep learning frameworks to fully utilize the performance our hardware can offer.
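For readers who want to try this themselves, a minimal sketch of how Intel-optimized TensorFlow is typically tuned on Xeon follows. The specific values below are illustrative assumptions (not settings from this article); the environment variables are standard OpenMP/KMP knobs that the Intel-optimized builds respond to, and the thread-pool calls are TensorFlow's own configuration API.

```python
import os

# Illustrative OpenMP/MKL tuning knobs for Intel-optimized TensorFlow
# inference on Xeon. Values are workload-dependent examples, not
# Intel-recommended settings from this article.
os.environ["OMP_NUM_THREADS"] = "56"   # e.g. one thread per physical core
os.environ["KMP_BLOCKTIME"] = "1"      # ms a worker thread spins before sleeping
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads to cores

# TensorFlow should be imported only after the environment is configured,
# and its thread pools matched to the CPU topology, e.g.:
#   import tensorflow as tf
#   tf.config.threading.set_intra_op_parallelism_threads(56)
#   tf.config.threading.set_inter_op_parallelism_threads(2)
```

Setting the environment before the first TensorFlow import matters because the MKL runtime reads these variables at load time; tuning them per workload is how deployments like the one described above extract full performance from the CPU.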
The next Artificial Intelligence Conference is a few weeks away — October 8 to 11 in London. If you’re able to attend, it’s a great way to learn how to put AI to work for your business.
1. Configuration: Intel® Xeon® Platinum 8180 CPU @ 2.50GHz; 2 sockets, 56 cores/socket; Hyper-Threading ON; Turbo Boost OFF; CPU scaling governor “performance”. RAM: 192 GB Samsung DDR4 @ 2666MHz (16GB DIMMs x 12). BIOS: Intel SE5C620.86B.0X.01.0007.062120172125. Hard disk: Intel SSDSC2BX01, 1.5TB. OS: CentOS Linux release 7.5.1804 (Core), kernel 3.10.0-862.9.1.el7.x86_64. Baseline: TensorFlow-Serving r1.9 (https://github.com/tensorflow/serving). Intel-optimized: TensorFlow-Serving r1.9 + Intel MKL-DNN optimizations (Intel MKL-DNN + MKLML).