Gigavision Workshop

Speakers

Ming-Hsuan Yang

University of California at Merced

Ming-Hsuan Yang is a research scientist at Google and a professor of Electrical Engineering and Computer Science at the University of California, Merced. He received his PhD in Computer Science from the University of Illinois at Urbana-Champaign in 2000. He has served as an area chair for several conferences, including the IEEE Conference on Computer Vision and Pattern Recognition, the IEEE International Conference on Computer Vision, the European Conference on Computer Vision, the Asian Conference on Computer Vision, and the AAAI National Conference on Artificial Intelligence. He serves as a program co-chair for the IEEE International Conference on Computer Vision in 2019, served as a program co-chair for the Asian Conference on Computer Vision in 2014, and was a general co-chair for the Asian Conference on Computer Vision in 2016. He has served as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (2007 to 2011), the International Journal of Computer Vision, Computer Vision and Image Understanding, Image and Vision Computing, and the Journal of Artificial Intelligence Research. Yang received the Google Faculty Award in 2009, the Distinguished Early Career Research Award from the UC Merced Senate in 2011, the Faculty Early Career Development (CAREER) Award from the National Science Foundation in 2012, and the Distinguished Research Award from the UC Merced Senate in 2015. He received paper awards from the IEEE Conference on Computer Vision and Pattern Recognition in 2018, the Asian Conference on Computer Vision in 2018, and the ACM Symposium on User Interface Software and Technology in 2017. He is an IEEE Fellow.

Ajmal Mian

The University of Western Australia

Ajmal Mian is a Professor of Computer Science. He is internationally known for his research in computer vision and has published over 160 scientific papers in prestigious journals and conferences, including PAMI, IJCV, TIP, PR, TNNLS, CVPR, and ECCV. He has been a Guest Editor for high-impact journals, namely Pattern Recognition, Computer Vision & Image Understanding, and Image & Vision Computing. He has received several research awards, including the West Australian Early Career Scientist of the Year Award, the Excellence in Research Supervision Award, the Vice-Chancellor's Mid-Career Research Award, the IAPR Best Scientific Paper Award, the Aspire Professional Development Award, the Outstanding Young Investigator Award, and the Australasian Distinguished Doctoral Dissertation Award. He was a General Chair for ACCV 2018, served as a Program Chair for DICTA 2012, and has been an Area Chair for WACV 2019, WACV 2018, ICPR 2016, and ACCV 2014. He has received two prestigious and competitive fellowships from the Australian Research Council (ARC) and has secured nine major grants from the ARC and the National Health and Medical Research Council of Australia, worth about $12 million in funding. His research interests include computer vision, machine learning, deep learning, 3D data analysis, face recognition, human action recognition, and remote sensing.

Yezhou Yang

Arizona State University

Yezhou Yang is an Assistant Professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University, where he directs the ASU Active Perception Group. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and performing high-level reasoning over the primitives for intelligent robots. His research focuses mainly on solutions to visual learning that significantly reduce the time needed to program intelligent agents. These solutions combine computer vision, deep learning, and AI algorithms to interpret people's actions and the scene's geometry. His research draws on the strengths of the symbolic approach, connectionism, and dynamicism.

Boqing Gong

Google

Boqing Gong is a research scientist at Google, Seattle, and a principal investigator at ICSI, Berkeley. His research in machine learning and computer vision focuses on data- and label-efficient learning (e.g., domain adaptation, few-shot, reinforcement, webly-supervised, and self-supervised learning) and the visual analytics of objects, scenes, human activities, and their attributes. Before joining Google in 2019, he worked at Tencent and was a tenure-track Assistant Professor at the University of Central Florida (UCF). He received an NSF CRII award in 2016 and an NSF BIGDATA award in 2017, both of which were the first of their kind granted to UCF. He is or has been a (senior) area chair of NeurIPS 2019, ICCV 2019, ICML 2019, AISTATS 2019, WACV 2018-2020, and AAAI 2020. He received his Ph.D. in 2015 from the University of Southern California, where his work was partially supported by the Viterbi Fellowship.

Chen Qian

SenseTime

Chen Qian is currently the Research Director of SenseTime, where he leads the team working on 3D vision and human analysis. His team's technology is widely used by the top four mobile phone companies in China and in apps at home and abroad for augmented reality, video sharing, and live streaming. He has published several papers at top conferences and in journals such as CVPR, AAAI, ECCV, and IJCA, including two oral presentations and one spotlight presentation. He also led the team that achieved first place in the Face Identification and Face Verification competitions of the MegaFace Challenge.

Zhiding Yu

NVIDIA

Zhiding Yu joined NVIDIA Research as a Research Scientist in 2018. Before that, he obtained his Ph.D. in ECE from Carnegie Mellon University in 2017 and his M.Phil. in ECE from The Hong Kong University of Science and Technology in 2012. His current research interests focus on deep representation learning, weakly/semi-supervised learning, transfer learning, and structured prediction, with applications to semantic/instance segmentation, object detection, boundary detection, and domain adaptation/generalization. He is a winner of the Domain Adaptation for Semantic Segmentation Challenge in the Workshop on Autonomous Driving (WAD) at CVPR 2018. He is a co-author of the best student paper at ISCSLP 2014 and a winner of the best paper award at WACV 2015. He was twice awarded the HKTIIT Post-Graduate Excellence Scholarship, in 2010 and 2012. His internship work on deep facial expression recognition at Microsoft Research won first runner-up at the EmotiW-SFEW Challenge 2015 and was integrated into the Microsoft Emotion Recognition API under Microsoft Azure Cognitive Services.

Schedule

13:00-13:10 Welcome and Opening Remarks
13:10-13:50 Keynote Talk 1
Title: Learning to Track and Segment Multiple Objects in Videos
Speaker: Ming-Hsuan Yang, University of California at Merced, USA
13:50-14:30 Keynote Talk 2
Title: Precision Modeling of 3D Human Motion for Behavioural and Performance Analysis
Speaker: Ajmal Mian, The University of Western Australia, Australia
14:30-15:00 Invited Talk 1
Title: Visual Recognition with Knowledge (VR-K): from an Active Agent's Perspective
Speaker: Yezhou Yang, Arizona State University, USA
15:00-15:30 Invited Talk 2
Title: Gaussian Attacks on the Deep Neural Networks with High-Resolution Inputs in a Low-Dimensional Space by Learning the Distribution of Adversarial Examples
Speaker: Boqing Gong, Google, USA
15:30-16:00 Coffee Break
16:00-16:30 Invited Talk 3
Title: Multi-person articulated tracking framework
Speaker: Chen Qian, SenseTime, China
16:30-17:00 Invited Talk 4
Title: Towards Weakly-Supervised Visual Scene Understanding
Speaker: Zhiding Yu, NVIDIA, USA
17:00-17:45 Invited Paper Presentation
Paper: AdaFrame: Adaptive Frame Selection for Fast Video Recognition
Speaker: Zuxuan Wu, University of Maryland, USA

Paper: Adaptive NMS: Refining Pedestrian Detection in a Crowd
Speaker: Songtao Liu, Beihang University, China

Paper: Collaborative Global-Local Networks for Memory-Efficient Segmentation of Ultra-High Resolution Images
Speaker: Wuyang Chen, Texas A&M University, USA
17:45-17:50 Closing Remarks