Search Results for “search”

A Progressive Batching L-BFGS Method for Machine Learning (PBQN)

The standard L-BFGS method relies on gradient approximations that are not dominated by noise, so that search directions are descent directions, the line search is reliable, and quasi-Newton updating yields useful quadratic models of the objective function. All of this appears to call for a full batch approach, but since small batch sizes give rise…

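The variance test at the heart of progressive batching can be sketched in a few lines: keep the current sample only if the noise in its gradient estimate is small relative to the gradient itself, and otherwise grow the batch. This is a minimal sketch, not the paper's algorithm; the grad_fn interface, the theta constant, and the batch-doubling rule are all illustrative assumptions.

    import numpy as np

    def progressive_batch_gradient(grad_fn, n_data, x, batch_size, theta=0.9, seed=0):
        # grad_fn(x, idx) -> per-example gradients, shape (len(idx), dim).
        # Hypothetical interface standing in for the real loss.
        rng = np.random.default_rng(seed)
        while True:
            idx = rng.choice(n_data, size=batch_size, replace=False)
            per_example = grad_fn(x, idx)
            g = per_example.mean(axis=0)                     # sampled gradient
            sample_var = per_example.var(axis=0).sum() / batch_size
            # Accept only when the noise is small relative to the signal;
            # otherwise double the batch and resample.
            if sample_var <= theta ** 2 * (g @ g) or batch_size >= n_data:
                return g, batch_size
            batch_size = min(2 * batch_size, n_data)

    # Toy usage: least-squares loss with per-example gradients a_i (a_i . x - b_i).
    A = np.random.default_rng(1).normal(size=(512, 10))
    b = A @ np.ones(10)
    grads = lambda x, idx: A[idx] * (A[idx] @ x - b[idx])[:, None]
    g, final_batch = progressive_batch_gradient(grads, len(A), np.zeros(10), batch_size=16)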

HyperNet: Towards Accurate Region Proposal Generation and Joint Object Detection

Almost all of the current top-performing object detection networks employ region proposals to guide the search for object instances. State-of-the-art region proposal methods usually need several thousand proposals to achieve high recall, which hurts detection efficiency. Although the latest Region Proposal Network method achieves promising detection accuracy with several hundred proposals, it still struggles…

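The recall figures the abstract refers to are typically measured by checking, for each ground-truth box, whether any proposal overlaps it beyond an IoU threshold. A minimal sketch of that metric, assuming [x1, y1, x2, y2] box coordinates and the common 0.5 threshold (this is generic evaluation code, not HyperNet itself):

    import numpy as np

    def iou(box, boxes):
        # Intersection-over-union of one box against an array of boxes.
        x1 = np.maximum(box[0], boxes[:, 0])
        y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2])
        y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_box = (box[2] - box[0]) * (box[3] - box[1])
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        return inter / (area_box + areas - inter)

    def proposal_recall(gt_boxes, proposals, thresh=0.5):
        # Fraction of ground-truth boxes covered by at least one proposal.
        hits = sum(1 for gt in gt_boxes if iou(gt, proposals).max() >= thresh)
        return hits / len(gt_boxes)

    gt = np.array([[0, 0, 10, 10]])
    props = np.array([[1, 1, 9, 9], [20, 20, 30, 30]])
    print(proposal_recall(gt, props))   # 1.0: the first proposal overlaps enough

Fewer proposals at the same recall means less downstream work for the detector, which is the efficiency trade-off described above.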

Using Artificial Intelligence for Crop Production

What is the Greenhouse Challenge? Recently, the “Deep Greens” team, made up of Intel AI data scientists and horticultural experts from the Universidad Nacional Autónoma de México (UNAM), competed in, and won, a 24-hour hackathon for one of five slots to grow cucumbers in an autonomous greenhouse later this year. The competition…


The Future of Retail is All About Artificial Intelligence

Artificial Intelligence is an engine that is poised to drive the future of retail to all-new destinations. We live in an era where a tremendous amount of data is being generated online and offline. However, access to larger datasets doesn’t by itself lead to improved business results. The key to success is the ability to extract meaning…


Much Deeper, Much Faster: Deep Neural Network Optimization with SigOpt and Nervana Cloud

Tools like neon, Caffe, Theano, and TensorFlow make it easier than ever to build custom neural networks and to reproduce groundbreaking research. Recent advances in cloud-based platforms like the Nervana Cloud enable practitioners to seamlessly build, train, and deploy these powerful methods. Finding the best configurations of these deep nets and efficiently tuning their parameters, however, remains one of the most limiting aspects…

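The tuning workflow described here boils down to a suggest/evaluate/observe loop: a tuner proposes a configuration, the platform trains a model with it, and the resulting score informs the next suggestion. The sketch below uses plain random search with a stub train_and_score function; it is not the SigOpt API, and the parameter names, ranges, and budget are illustrative assumptions (SigOpt-style Bayesian optimization would choose the points adaptively rather than at random):

    import random

    def train_and_score(config):
        # Stand-in for an expensive training run on a cloud platform;
        # returns a toy "validation score" so the sketch is runnable.
        return -abs(config["learning_rate"] - 0.01) - abs(config["momentum"] - 0.9)

    def suggest():
        # One random configuration from the search space.
        return {
            "learning_rate": 10 ** random.uniform(-4, -1),  # log-uniform draw
            "momentum": random.uniform(0.5, 0.99),
            "batch_size": random.choice([32, 64, 128, 256]),
        }

    best_score, best_config = float("-inf"), None
    for _ in range(20):                       # observation budget
        config = suggest()
        score = train_and_score(config)
        if score > best_score:
            best_score, best_config = score, config
    print(best_config, best_score)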

Toward Higher-Order Training: A Progressive Batching L-BFGS Method

Stochastic Gradient Descent and its variants, referred to here collectively as SGD, have been the de facto methods for training neural networks. These methods aim to minimize a network-specific loss function F(x) whose lower values correspond to better-trained versions of the neural network in question. To find a minimizer x*, SGD relies solely on knowing the…

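In its simplest form the update the post builds on is x_{k+1} = x_k - alpha * g_k, where g_k is a gradient of F estimated from a mini-batch. A minimal sketch, with a toy least-squares loss standing in for a network's loss function (the data, step size, and iteration count are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 5))                  # toy dataset
    b = A @ np.ones(5) + 0.1 * rng.normal(size=100)

    def minibatch_grad(x, batch=16):
        # Gradient of F(x) = mean of 0.5 * (a_i . x - b_i)^2,
        # estimated on a random mini-batch.
        idx = rng.choice(len(A), size=batch, replace=False)
        return A[idx].T @ (A[idx] @ x - b[idx]) / batch

    x = np.zeros(5)
    alpha = 0.01                                   # step size (learning rate)
    for _ in range(500):
        x -= alpha * minibatch_grad(x)             # SGD step: x <- x - alpha * g
    print(np.round(x, 2))                          # approaches the true coefficients (ones)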
