Toward Deep Analysis of Human Actions in Unconstrained Environments

Published: 2016-04-19
Abstract: Human action understanding from video is receiving extensive research interest in computer vision, owing to its wide applications in surveillance, human-computer interfaces, content-based video retrieval, etc. The challenges of action understanding arise from background clutter, viewpoint changes, and variations in motion and appearance. In this talk, I will report our continuing efforts (CVPR 13, ICCV 13, CVPR 14, ECCV 14, CVPR 15, TIP 14, IJCV 15, CVPR 16) to address these challenges. These works range from mining middle-level parts, multi-view encoding of local descriptors, and hierarchical models, to utilizing deep networks for action recognition and detection. Experimental results on large public datasets (e.g. UCF101, HMDB51) demonstrate the effectiveness of the proposed methods. In addition, I will give a brief overview of our group's recent progress in the area of computer vision and deep learning.
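Since the abstract mentions utilizing deep networks for action recognition on datasets such as UCF101, the following is a minimal, hypothetical sketch of one common baseline: a per-frame CNN whose features are average-pooled over time before clip-level classification. It is an illustration only, not the speaker's actual method; the TinyFrameCNN and ClipClassifier names are invented for this example.

# Minimal illustrative sketch of frame-level feature pooling for video
# action classification -- not the speaker's method. A tiny stand-in CNN
# is used as the per-frame backbone so the example is self-contained.
import torch
import torch.nn as nn

class TinyFrameCNN(nn.Module):
    """Stand-in per-frame feature extractor (hypothetical, for illustration)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                      # x: (N, 3, H, W) batch of frames
        h = self.features(x).flatten(1)        # (N, 64)
        return self.fc(h)                      # (N, feat_dim)

class ClipClassifier(nn.Module):
    """Average-pools frame features over time, then classifies the clip."""
    def __init__(self, num_classes=101, feat_dim=128):   # e.g. 101 classes for UCF101
        super().__init__()
        self.frame_cnn = TinyFrameCNN(feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, clip):                   # clip: (T, 3, H, W) sampled frames
        feats = self.frame_cnn(clip)           # (T, feat_dim) per-frame features
        pooled = feats.mean(dim=0)             # temporal average pooling
        return self.classifier(pooled)         # (num_classes,) clip-level logits

if __name__ == "__main__":
    clip = torch.randn(16, 3, 112, 112)        # 16 dummy frames as a toy clip
    logits = ClipClassifier()(clip)
    print(logits.shape)                        # torch.Size([101])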
 
Speaker Bio: Yu Qiao received his Ph.D. from the University of Electro-Communications, Japan, in 2006. He was a JSPS fellow and then a project assistant professor at the University of Tokyo from 2007 to 2010. He is now a professor at the Shenzhen Institutes of Advanced Technology (SIAT), Chinese Academy of Sciences, and the deputy director of its multimedia research lab. His research interests include computer vision, speech processing, pattern recognition, and deep learning. He has published more than 110 papers in these fields. He received the Lu Jiaxi Young Researcher Award from the Chinese Academy of Sciences.
 
 
Contact: 张林
 
