TY - GEN
T1 - "Clustering of dancelets" - Towards video recommendation based on dance styles
AU - Han, Tingting
AU - Yao, Hongxun
AU - Sun, Xiaoshuai
AU - Zhang, Yanhao
AU - Zhao, Sicheng
AU - Lu, Xiusheng
AU - Huang, Yinghao
AU - Xie, Wenlong
N1 - Publisher Copyright:
© 2015 ACM.
PY - 2015/10/13
Y1 - 2015/10/13
N2 - Dance is a special and important type of action, composed of abundant and varied action elements. However, the recommendation of dance videos on the web is still not well studied, and it is hard to achieve with traditional methods that rely on associated texts or static features of video content. In this paper, we study the problem with a focus on the extraction and representation of action information in dances. We propose to recommend dance videos based on automatically discovered "Dance Styles", which play a significant role in characterizing different types of dances. To bridge the semantic gap between video content and the mid-level concept of style, we take advantage of a mid-level action representation method and extract representative patches as "Dancelets", a form of intermediary between videos and concepts. Furthermore, we propose to employ motion boundaries as saliency priors and sparsely extract patches containing more representative information to generate a set of dancelet candidates. Dancelets are then discovered by the Normalized Cut method, which excels at grouping visually similar patterns into the same clusters. For fast and effective recommendation, a random forest-based index is built, and the ranking results are derived from the matching results in all the leaf nodes. Extensive experiments on web dance videos demonstrate the effectiveness of the proposed methods for dance style discovery and style-based video recommendation.
AB - Dance is a special and important type of action, composed of abundant and varied action elements. However, the recommendation of dance videos on the web is still not well studied, and it is hard to achieve with traditional methods that rely on associated texts or static features of video content. In this paper, we study the problem with a focus on the extraction and representation of action information in dances. We propose to recommend dance videos based on automatically discovered "Dance Styles", which play a significant role in characterizing different types of dances. To bridge the semantic gap between video content and the mid-level concept of style, we take advantage of a mid-level action representation method and extract representative patches as "Dancelets", a form of intermediary between videos and concepts. Furthermore, we propose to employ motion boundaries as saliency priors and sparsely extract patches containing more representative information to generate a set of dancelet candidates. Dancelets are then discovered by the Normalized Cut method, which excels at grouping visually similar patterns into the same clusters. For fast and effective recommendation, a random forest-based index is built, and the ranking results are derived from the matching results in all the leaf nodes. Extensive experiments on web dance videos demonstrate the effectiveness of the proposed methods for dance style discovery and style-based video recommendation.
KW - Dance Style
KW - Dancelets Mining
KW - Normalized Cut
KW - Random Forest-based Index
KW - Video Recommendation
UR - https://www.scopus.com/pages/publications/84962892168
U2 - 10.1145/2733373.2806363
DO - 10.1145/2733373.2806363
M3 - Conference contribution
AN - SCOPUS:84962892168
T3 - MM 2015 - Proceedings of the 2015 ACM Multimedia Conference
SP - 915
EP - 918
BT - MM 2015 - Proceedings of the 2015 ACM Multimedia Conference
PB - Association for Computing Machinery, Inc
T2 - 23rd ACM International Conference on Multimedia, MM 2015
Y2 - 26 October 2015 through 30 October 2015
ER -