Intel AI Research at CVPR 2018

The 2018 Conference on Computer Vision and Pattern Recognition (CVPR) takes place June 18th-22nd in Salt Lake City, Utah, USA. CVPR is known as the premier annual computer vision event, consisting of the main conference along with poster sessions, co-located workshops, and tutorials. Intel's presence at CVPR includes 12 accepted papers and poster sessions, one competition, an Intel AI-sponsored Doctoral Consortium, two workshops, and an Intel AI Academy DevJam event.

Intel AI Academy DevJam

Sunday, June 17, 6:00pm-9:00pm
Elevate, 149 200 S., Salt Lake City, UT

Experience these demos at the Intel® AI DevJam:

  • Using AI for Clean Water Detection – identifies bacteria in water
  • Robotic Learning – a robotic hand learns to classify objects in 3D space by "feel" alone
  • Amazon* DeepLens – learn more about deep learning inference
  • Innovator using DeepLens – identify birds by their song, and more
  • Windows* ML AI – image recognition
  • Lexset – where should you put the furniture and lights in your room?
  • Celebrity Face – what famous person do you look like?
  • Orbii – helps you identify possible threats using a smart camera
  • Landmark Detection – used as you drive, this camera identifies landmarks and displays information about them

In addition, you can talk with our Intel® Student Ambassadors and Intel developers, and network with your peers.

Register Now >


Low Power Image Recognition Competition 2018

Monday, June 18th, 2018

The 4th IEEE International Low-Power Image Recognition Challenge (LPIRC) will be held in Salt Lake City, Utah, co-located with CVPR. This year, teams are encouraged to compete in three different tracks:

  • Track 1: Teams submit their models in TensorFlow Lite (TfLite) format before CVPR for image classification. This track measures accuracy and execution time on a fixed compute platform.
  • Track 2: Teams submit their programs before CVPR for object detection. The organizers will execute the programs on an Nvidia TX2 and measure accuracy and energy consumption.
  • Track 3: As in previous years, participants bring their systems to an on-site object detection competition. There is no restriction on hardware or software (unlike Track 2, which fixes the platform to the Nvidia TX2).

Prizes in each track: $2,000 for first place, $1,000 for second place, and $500 for third place.

Submissions for Tracks 1 and 2 have closed. Track 3 is on-site; participants must bring their own systems.


Booth & Demos

Find Intel AI at booth #1337 from Tuesday, June 19th through Thursday, June 21st, 10:00am-6:30pm.

Tiramisu DenseNet Architecture for Precise Segmentation
Amlaan Bhoi

We use a subset of the CVPR 2018 WAD Video Segmentation Challenge dataset [1] to pre-train a Tiramisu DenseNet. The architecture is based on the model described in [2]. The DenseNet uses concatenated skip connections at each scale of the image, which eases training and helps the network receive information from the initial stages. Furthermore, with dense connectivity within each block, we find that these networks are able to focus on fine details relevant to features at that scale. The skip connections propagate these fine details through to the final prediction stage as well. The model is trained with the Intel® Optimization for TensorFlow* and the Intel® Distribution for Python*.
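For a concrete picture of the dense connectivity described above, here is a minimal sketch of a Tiramisu-style dense block in tf.keras. The layer count and growth rate are illustrative placeholders, not the hyperparameters of the demo model.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, n_layers=4, growth_rate=16):
    """Tiramisu-style dense block: each layer sees all earlier outputs."""
    features = [x]
    for _ in range(n_layers):
        # Each new layer consumes the concatenation of every prior feature map.
        y = layers.Concatenate()(features) if len(features) > 1 else features[0]
        y = layers.BatchNormalization()(y)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        features.append(y)
    # Only the newly produced maps leave the block; skip connections around
    # the block carry earlier features (and their fine detail) forward.
    return layers.Concatenate()(features[1:])

inputs = tf.keras.Input(shape=(None, None, 3))
model = tf.keras.Model(inputs, dense_block(inputs))
```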

Clean Water AI
Peter Ma

This demo uses deep learning, specifically image classification and object recognition with convolutional neural networks, to track water contamination. Current water contamination detection relies on chemical sensors, which are very effective at detecting chemical contamination but not bacteria. Clean Water AI is built on a Caffe* network and runs AI at the edge, pairing optical detection with high-speed cameras. This allows the system to classify bacteria and other contaminants in near real time.
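As a rough illustration of how such edge-side optical classification can run, the sketch below performs inference with a pre-trained Caffe model through OpenCV's dnn module. The model file names, input size, and camera index are hypothetical placeholders; the actual Clean Water AI network is not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical model files; stand-ins for the demo's trained Caffe network.
net = cv2.dnn.readNetFromCaffe("bacteria_net.prototxt", "bacteria_net.caffemodel")

def classify_frame(frame):
    # Pack the frame into the 4-D blob the network expects (resized and
    # mean-subtracted); the mean values here are generic ImageNet ones.
    blob = cv2.dnn.blobFromImage(frame, 1.0, (224, 224), (104, 117, 123))
    net.setInput(blob)
    scores = net.forward().flatten()
    return int(np.argmax(scores)), float(scores.max())

cap = cv2.VideoCapture(0)  # high-speed camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    label, confidence = classify_frame(frame)
    print(f"class {label}: {confidence:.2f}")
```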

Emotion Detection
Justin Shenk

Computer vision-based emotion detection is a new field, made possible by increased computational power and advances in algorithmic design. Classification of facial expressions as one of several emotions (e.g., happy, surprise) on IoT devices is made possible by the Intel® Movidius™ Neural Compute Stick, whose very low power envelope (1 watt) allows real-time classification at the edge. The model is a deep convolutional network trained with TensorFlow on the FER-2013 dataset.
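As a sketch of the kind of classifier involved, the tf.keras model below maps 48x48 grayscale FER-2013 faces to seven emotion classes. The architecture is illustrative, not the exact network behind the demo, and the trained graph would still need to be compiled for the Neural Compute Stick with Movidius' toolchain before edge deployment.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative CNN for FER-2013: 48x48 grayscale input, 7 emotion classes
# (angry, disgust, fear, happy, sad, surprise, neutral).
model = tf.keras.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```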

RLCoach + SenseNet
Jason Toy

The majority of artificial intelligence research, as it relates to biological senses, has focused on vision. The recent explosion of machine learning, and deep learning in particular, can be partially attributed to the release of high-quality datasets from which algorithms can model the world; most of these datasets consist of images. We believe that focusing on sensorimotor systems and tactile feedback will produce algorithms that better mimic human intelligence. Here we present SenseNet: a collection of tactile simulation environments for 3D object manipulation. SenseNet was created for researching and training artificial intelligences (AIs) to interact with their environment via sensorimotor neural systems and tactile feedback. We aim to spark the same explosion seen in image processing, but in the domain of tactile feedback and sensorimotor research. We hope that SenseNet offers researchers in both the machine learning and computational neuroscience communities new opportunities and avenues to explore.
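To make the sensorimotor interaction pattern concrete, here is a hedged sketch of the agent/environment loop such tactile simulators expose. `TouchEnv` and its methods are hypothetical stand-ins, not SenseNet's actual API.

```python
import random

class TouchEnv:
    """Hypothetical tactile environment: a fingertip probes a 3D object."""

    def reset(self):
        return [0.0] * 16  # initial touch-sensor readings

    def step(self, action):
        observation = [random.random() for _ in range(16)]  # sensor array
        reward = 0.0                   # e.g., +1 once the object is identified
        done = random.random() < 0.05  # episode ends occasionally
        return observation, reward, done

# The standard reinforcement-learning loop: act, sense, repeat.
env = TouchEnv()
obs, done = env.reset(), False
while not done:
    action = random.randrange(6)  # move the fingertip along one of six axes
    obs, reward, done = env.step(action)
```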

Amazon* DeepLens
Paul Langdon

As the field of computer vision becomes commoditized and widely available to secondary markets, how do you start to explore the technology and techniques involved, and, more importantly, convince your boss to let you build some proofs of concept to keep your industry competitive? We will explore a range of these technologies that can easily be jumpstarted with well-documented use cases, on hardware that is both widely available and within many discretionary spending budgets. The models you create and extend scale to the target platform, whether low-power edge devices or more robust edge gateways. The frameworks used include TensorFlow and MXNet*; the hardware includes AWS DeepLens, the UP² board, the Raspberry Pi Zero, and Intel® Movidius™ technology.

Fully Convolutional Model for Variable Bit Length and Lossy High-Density Compression of Mammograms
Aupendu Kar, Sri Phani Krishna Karri, Nirmalya Ghosh, Ramanathan Sethuraman, Debdoot Sheet

Early work on medical image compression dates to the 1980s, with the impetus of deploying teleradiology systems for high-resolution digital X-ray detectors. Commercially deployed systems of the period could compress 4,096 x 4,096 images at 12 bpp down to 2 bpp using lossless arithmetic coding, and over the years JPEG and JPEG2000 were adopted, reaching as low as 0.1 bpp. Inspired by the reprise of deep learning-based compression for natural images over the last two years, we propose a fully convolutional autoencoder for lossy compression that preserves diagnostically relevant features. We then leverage arithmetic coding to encapsulate the high redundancy of features for further high-density code packing, leading to variable bit lengths. We demonstrate performance on two publicly available digital mammography datasets using peak signal-to-noise ratio (pSNR), the structural similarity (SSIM) index, and domain-adaptability tests between datasets. At high-density compression factors of >300x (~0.04 bpp), our approach rivals JPEG and JPEG2000, as evaluated through a radiologist's visual Turing test.
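As a rough sketch of the first stage the abstract describes, the tf.keras model below is a fully convolutional autoencoder: strided convolutions compress the image into a compact code, and transposed convolutions reconstruct it. Depths, filter counts, and the bottleneck width are illustrative assumptions; the paper's actual network and its arithmetic-coding stage are not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(None, None, 1))  # grayscale mammogram

# Encoder: strided convolutions shrink spatial extent into a compact code.
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
code = layers.Conv2D(8, 3, strides=2, padding="same", activation="sigmoid")(x)

# Decoder: transposed convolutions reconstruct the image from the code.
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(code)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")  # train to reconstruct inputs
```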


Papers & Poster Sessions

Tuesday, June 19th:

  1. 10:15am–12:30pm, Halls C-E, I13: Interactive Image Segmentation with Latent Diversity
    Zhuwen Li, Qifeng Chen, Vladlen Koltun
  2. 12:30pm–2:45pm, Halls C-E, K21: Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers 
    Stephan R. Richter, Stefan Roth
  3. 4:30pm–6:30pm, Halls C-E, N7: Learning to See in the Dark 
    Chen Chen, Qifeng Chen, Jia Xu, Vladlen Koltun
  4. 4:30pm–6:30pm, Halls C-E, H1: Efficient, Sparse Representation of Manifold Distance Matrices for Classical Scaling 
    Javier S. Turek, Alexander Huth
  5. 4:30pm–6:30pm, Halls C-E, H4: Motion Segmentation by Exploiting Complementary Geometric Models  
    Xun Xu, Loong-Fah Cheong, Zhuwen Li

Wednesday, June 20th:

  1. 10:15am–12:30pm, Halls C-E, F4: Tangent Convolutions for Dense Prediction in 3D
    Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, and Qian-Yi Zhou
  2. 12:30pm–2:45pm, Halls C-E, B3: Single Image Reflection Separation with Perceptual Losses 
    Xuaner Zhang, Ren Ng, Qifeng Chen

Thursday, June 21st:

  1. 10:15am–12:30pm, Halls D-E, G4: Learning Visual Knowledge Memory Networks for Visual Question Answering 
    Zhou Su, Chen Zhu, Dongqi Cai, Yinpeng Dong, Yurong Chen, Jianguo Li (corresponding author)
  2. 4:30pm–6:30pm, Halls D-E, I15: Boosting Adversarial Attacks with Momentum 
    Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li
  3. 4:30pm–6:30pm, Halls D-E, F19: Deep Learning under Privileged Information Using Heteroscedastic Dropout 
    John Lambert, Ozan Sener, Silvio Savarese
  4. 4:30pm–6:30pm, Halls D-E, F3: Semi-Parametric Image Synthesis 
    Xiaojuan Qi, Qifeng Chen, Jiaya Jia, Vladlen Koltun
  5. 4:30pm–6:30pm, Halls D-E, K21: Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks
    Aojun Zhou, Anbang Yao, Kuan Wang, Yurong Chen

Doctoral Consortium

Intel sponsored

June 20th, 2018, 12:30pm-2:30pm

The Doctoral Consortium provides a unique opportunity for students who are close to finishing, or have recently finished, their doctoral degree to interact with experienced researchers in computer vision. A senior member of the community will be assigned as a mentor to each student based on the student's preferences or similarity of research interests. All students and mentors will attend a Doctoral Consortium meeting/luncheon, giving students an opportunity to discuss their ongoing research and career plans with their mentor. In addition, each student will present a poster, describing either their thesis research or a single recent paper, to the other participants and their mentors.


Women in Computer Vision (WiCV) Workshop

Intel sponsored

June 22nd, 2018

This all-day workshop for both men and women is open to researchers of all levels.

The goals of this workshop are to:

  • Raise the visibility of female computer vision researchers through invited research talks by women who are role models in the field.
  • Give junior female students and researchers opportunities to present their work via a poster session and travel awards.
  • Share experiences and career advice for female students and professionals.

References

[1] CVPR 2018 WAD. CVPR 2018 WAD Video Segmentation Challenge, 2018 (accessed April 28, 2018).

[2] Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional DenseNets for semantic segmentation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1175-1183. IEEE, 2017.

Notices and Disclaimers

Intel, the Intel logo, and Movidius are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© Intel Corporation.