Jul 25, 2016   |   Scott Gray

Still not slowing down: Benchmarking optimized Winograd implementations

By: Scott Gray and Urs Köster

This is part 3 of a series of posts on using the Winograd algorithm to make convolutional networks faster than ever before. In the second part we provided a technical overview of how the algorithm works. Since the first set of Winograd kernels in neon, which we described in…


#neon

Mar 04, 2016   |   Scott Gray

"Not so fast, FFT": Winograd

Deep learning thrives on speed. Faster training enables the construction of larger and more complex networks to tackle new domains such as speech or decision making. Recently, small convolutional filter sizes have become an important component in convolutional neural networks such as Google’s AlphaGo network or Microsoft’s deep residual networks. While most convolutions are computed…


#neon
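
Both posts build on the Winograd minimal filtering algorithm described in Lavin and Gray's "Fast Algorithms for Convolutional Neural Networks." As a rough illustration of the idea only, and not of the neon kernels themselves, here is a minimal NumPy sketch of the smallest 1-D case, F(2,3), which computes two outputs of a 3-tap filter with 4 multiplies instead of 6; the matrix names B, G, and A follow the paper's notation, and `winograd_f23` is a hypothetical helper name.

```python
import numpy as np

# Transform matrices for F(2,3): two outputs of a 3-tap filter
# computed with 4 elementwise multiplies instead of 6.
B = np.array([[1,  0, -1,  0],
              [0,  1,  1,  0],
              [0, -1,  1,  0],
              [0,  1,  0, -1]], dtype=np.float64)   # data transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float64)  # filter transform
A = np.array([[1, 1,  1,  0],
              [0, 1, -1, -1]], dtype=np.float64)    # inverse transform

def winograd_f23(d, g):
    """Correlate a length-4 input tile d with a 3-tap filter g,
    producing 2 outputs via the Winograd minimal algorithm."""
    return A @ ((G @ g) * (B @ d))  # the elementwise product is the 4 multiplies

# Sanity check against direct correlation.
d = np.random.randn(4)
g = np.random.randn(3)
direct = np.array([d[0:3] @ g, d[1:4] @ g])
assert np.allclose(winograd_f23(d, g), direct)
```

The same structure nests to 2-D tiles as F(2x2, 3x3), where a 4x4 tile needs 16 multiplies instead of the 36 required by direct 3x3 convolution, the 2.25x arithmetic reduction these posts benchmark.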
