{"_buckets": {"deposit": "7ff38391-e49b-4627-995e-e6c49253f328"}, "_deposit": {"id": "2399", "owners": [], "pid": {"revision_id": 0, "type": "depid", "value": "2399"}, "status": "published"}, "_oai": {"id": "oai:repository.dl.itc.u-tokyo.ac.jp:00002399", "sets": ["280", "330"]}, "item_7_alternative_title_1": {"attribute_name": "その他のタイトル", "attribute_value_mlt": [{"subitem_alternative_title": "人体動作と音楽の解析に基づく舞踊動作生成"}]}, "item_7_biblio_info_7": {"attribute_name": "書誌情報", "attribute_value_mlt": [{"bibliographicIssueDates": {"bibliographicIssueDate": "2007-03-22", "bibliographicIssueDateType": "Issued"}, "bibliographic_titles": [{}]}]}, "item_7_date_granted_25": {"attribute_name": "学位授与年月日", "attribute_value_mlt": [{"subitem_dategranted": "2007-03-22"}]}, "item_7_degree_grantor_23": {"attribute_name": "学位授与機関", "attribute_value_mlt": [{"subitem_degreegrantor": [{"subitem_degreegrantor_name": "University of Tokyo (東京大学)"}]}]}, "item_7_degree_name_20": {"attribute_name": "学位名", "attribute_value_mlt": [{"subitem_degreename": "博士(情報理工学)"}]}, "item_7_description_5": {"attribute_name": "抄録", "attribute_value_mlt": [{"subitem_description": "Recently, demands for synthesizing realistic human motions are rapidly increasing in computer graphics (CG) and robotics fields. One of the easy solutions to this issue is to use a motion capture system. However, it still remains difficult to capture the motion data that animators really want, and most prior work aimed to solve this problem by editing motion capture data, seamlessly blending or connecting motion capture data sets, or modifying them according to physical properties. In most cases, human movements, however, are induced by external signals: people first receive visual information such as environmental obstacles from eyes, or audio information such as speech or music from ears, and then recognize essential information or feel some emotions from the obtained information, and finally perform movements. Considering these aspects makes it possible to automatically synthesize more human-like motion, and, despite this possibility, only a few methods considering these aspects have been developed. To meet this need, we are focusing on dance performance as an experimental subject. Dance performance strongly depends on musical features such as rhythm, speed, mood, intensity, or genre of played music recognized by dance performers, and is well-suited to the issue. The ultimate goal of our study is to realize dancing-to-music ability for CG characters and humanoid robots. This dissertation describes three novel studies. The first study is to analyze the relationship between motion and musical rhythm. According to observation of human dance motion, motion rhythm is represented with stop motion called a keypose, at which dancers clearly stop their movements, and the motion rhythm is synchronized with musical rhythm to perform dance performance. The proposed method aims to reveal the relationship and consists of music analysis step that estimates musical rhythm, and motion analysis step that extract keypose candidates. By integrating these information, keyposes that are very similar to dancers\u0027understandings are extracted. The second study is to model how to modify upper body motion based on the speed of played music. 
When we observed structured dance motion performed at a normal music playback speed and motion performed at faster music playback speed, we found that the detail of each motion is slightly different while the whole of the dance motion is similar in both cases. This phenomenon is derived from the fact that dancers omit the details and perform the essential part of the dance in order to follow the faster music speed. To prove this, we analyzed the motion differences in the frequency domain, and obtained two insights on the omission of motion details: (1) The keyposes mentioned in the first study are preserved, and (2) High frequency components are gradually reduced depending on the musical speed. Based on these insights, we modeled the motion modification using musical rhythm and kinematic constraints that humans have. We show the effectiveness of our algorithm through experimental results. Additionally, we also developed some applications for CG character animation and humanoid robot motion generation. The third study is to automatically synthesize dance performance that is well matched to input music. People feel various emotions depending on musical mood. For example, people feel quiet and relaxed when listening to relaxing music such as a ballad, and they feel excited when listening to intense music such as hard rock music. We observed dance performance, especially original dance, and found that the same is often true for dance performance. Based on this, we designed an algorithm to synthesize new dance performance by assuming the relationship between motion and music rhythm mentioned in the first study, and the relationship between motion and music intensity. As for motion synthesis step, we propose two methods: a globally optimal method and a locally optimal method. Users can select one of them depending on their purposes. Our studies have many advances over prior work on human motion analysis and synthesis. 
They contribute to not only entertainment systems of CG animation and humanoid robots, but also applications for digital archive of intangible cultural heritages.", "subitem_description_type": "Abstract"}]}, "item_7_dissertation_number_26": {"attribute_name": "学位授与番号", "attribute_value_mlt": [{"subitem_dissertationnumber": "甲第22809号"}]}, "item_7_full_name_3": {"attribute_name": "著者別名", "attribute_value_mlt": [{"nameIdentifiers": [{"nameIdentifier": "6639", "nameIdentifierScheme": "WEKO"}], "names": [{"name": "白鳥, 貴亮"}]}]}, "item_7_identifier_registration": {"attribute_name": "ID登録", "attribute_value_mlt": [{"subitem_identifier_reg_text": "10.15083/00002393", "subitem_identifier_reg_type": "JaLC"}]}, "item_7_select_21": {"attribute_name": "学位", "attribute_value_mlt": [{"subitem_select_item": "doctoral"}]}, "item_7_subject_13": {"attribute_name": "日本十進分類法", "attribute_value_mlt": [{"subitem_subject": "548", "subitem_subject_scheme": "NDC"}]}, "item_7_text_22": {"attribute_name": "学位分野", "attribute_value_mlt": [{"subitem_text_value": "Information Science and Technology (情報理工学)"}]}, "item_7_text_24": {"attribute_name": "研究科・専攻", "attribute_value_mlt": [{"subitem_text_value": "Department of Information and Communication Engineering, Graduate School of Information Science and Technology (情報理工学系研究科電子情報学専攻)"}]}, "item_7_text_27": {"attribute_name": "学位記番号", "attribute_value_mlt": [{"subitem_text_value": "博情第139号"}]}, "item_7_text_36": {"attribute_name": "資源タイプ", "attribute_value_mlt": [{"subitem_text_value": "Thesis"}]}, "item_7_text_4": {"attribute_name": "著者所属", "attribute_value_mlt": [{"subitem_text_value": "大学院情報理工学系研究科電子情報学専攻"}]}, "item_creator": {"attribute_name": "著者", "attribute_type": "creator", "attribute_value_mlt": [{"creatorNames": [{"creatorName": "Shiratori, Takaaki"}], "nameIdentifiers": [{"nameIdentifier": "6638", "nameIdentifierScheme": "WEKO"}]}]}, "item_files": {"attribute_name": "ファイル情報", "attribute_type": "file", "attribute_value_mlt": [{"accessrole": "open_date", "date": [{"dateType": "Available", "dateValue": "2017-05-31"}], "displaytype": "detail", "download_preview_message": "", "file_order": 0, "filename": "shiratori.pdf", "filesize": [{"value": "14.0 MB"}], "format": "application/pdf", "future_date_message": "", "is_thumbnail": false, "licensetype": "license_free", "mimetype": "application/pdf", "size": 14000000.0, "url": {"label": "shiratori.pdf", "url": "https://repository.dl.itc.u-tokyo.ac.jp/record/2399/files/shiratori.pdf"}, "version_id": "506739a7-8298-4593-bf09-a321639deefc"}]}, "item_keyword": {"attribute_name": "キーワード", "attribute_value_mlt": [{"subitem_subject": "motion capture", "subitem_subject_scheme": "Other"}, {"subitem_subject": "auditory scene analysis", "subitem_subject_scheme": "Other"}, {"subitem_subject": "human motion synthesis", "subitem_subject_scheme": "Other"}]}, "item_language": {"attribute_name": "言語", "attribute_value_mlt": [{"subitem_language": "eng"}]}, "item_resource_type": {"attribute_name": "資源タイプ", "attribute_value_mlt": [{"resourcetype": "thesis", "resourceuri": "http://purl.org/coar/resource_type/c_46ec"}]}, "item_title": "Synthesis of Dance Performance Based on Analyses of Human Motion and Music", "item_titles": {"attribute_name": "タイトル", "attribute_value_mlt": [{"subitem_title": "Synthesis of Dance Performance Based on Analyses of Human Motion and Music"}]}, "item_type_id": "7", "owner": "1", "path": ["280", "330"], "permalink_uri": "https://doi.org/10.15083/00002393", "pubdate": {"attribute_name": "公開日", "attribute_value": 
"2012-03-01"}, "publish_date": "2012-03-01", "publish_status": "0", "recid": "2399", "relation": {}, "relation_version_is_last": true, "title": ["Synthesis of Dance Performance Based on Analyses of Human Motion and Music"], "weko_shared_id": null}
Synthesis of Dance Performance Based on Analyses of Human Motion and Music
https://doi.org/10.15083/00002393
Name / File | License | Action
---|---|---
shiratori.pdf (14.0 MB, https://repository.dl.itc.u-tokyo.ac.jp/record/2399/files/shiratori.pdf) | |
Item type | 学位論文 / Thesis or Dissertation
---|---
Publication date | 2012-03-01
Title | Synthesis of Dance Performance Based on Analyses of Human Motion and Music
Language | eng
Keyword | motion capture (Subject scheme: Other)
Keyword | auditory scene analysis (Subject scheme: Other)
Keyword | human motion synthesis (Subject scheme: Other)
Resource type | thesis (http://purl.org/coar/resource_type/c_46ec)
ID registration | 10.15083/00002393 (Type: JaLC)
Alternative title | 人体動作と音楽の解析に基づく舞踊動作生成
Creator | Shiratori, Takaaki (Identifier: 6638, Identifier scheme: WEKO)
Author alternative name | 白鳥, 貴亮 (Identifier: 6639, Identifier scheme: WEKO)
Author affiliation | 大学院情報理工学系研究科電子情報学専攻
Description type | Abstract
Abstract | Demand for synthesizing realistic human motion is rapidly increasing in the computer graphics (CG) and robotics fields. One straightforward solution is to use a motion capture system. However, it remains difficult to capture exactly the motion data that animators want, and most prior work has addressed this problem by editing motion capture data, seamlessly blending or connecting motion capture data sets, or modifying them according to physical properties. In most cases, however, human movements are induced by external signals: people first receive visual information, such as environmental obstacles, through their eyes, or audio information, such as speech or music, through their ears; they then recognize essential information or feel emotions from that input, and finally perform movements. Taking these aspects into account makes it possible to synthesize more human-like motion automatically, yet only a few methods that do so have been developed. To meet this need, we focus on dance performance as an experimental subject. Dance performance strongly depends on musical features such as the rhythm, speed, mood, intensity, and genre of the played music as recognized by the performer, and is therefore well suited to this issue. The ultimate goal of our study is to realize a dancing-to-music ability for CG characters and humanoid robots. This dissertation describes three novel studies. The first study analyzes the relationship between motion and musical rhythm. Observation of human dance motion shows that motion rhythm is represented by stop motions called keyposes, at which dancers clearly stop their movements, and that motion rhythm is synchronized with musical rhythm during a dance performance. The proposed method aims to reveal this relationship and consists of a music analysis step that estimates musical rhythm and a motion analysis step that extracts keypose candidates. By integrating this information, keyposes that closely match dancers' own understanding are extracted. The second study models how upper-body motion is modified according to the speed of the played music. When we observed structured dance motion performed at normal music playback speed and at a faster playback speed, we found that the details of each motion differ slightly while the overall dance motion remains similar. This is because dancers omit details and perform only the essential part of the dance in order to follow the faster music. To verify this, we analyzed the motion differences in the frequency domain and obtained two insights on the omission of motion detail: (1) the keyposes identified in the first study are preserved, and (2) high-frequency components are gradually reduced as the music speeds up. Based on these insights, we modeled the motion modification using musical rhythm and human kinematic constraints, and we show the effectiveness of the algorithm through experimental results. We also developed applications for CG character animation and humanoid robot motion generation. The third study automatically synthesizes dance performance that is well matched to input music. People feel different emotions depending on musical mood: for example, they feel quiet and relaxed when listening to relaxing music such as a ballad, and excited when listening to intense music such as hard rock. We observed dance performances, especially original dances, and found that the same often holds for dance. Based on this, we designed an algorithm that synthesizes new dance performances using the relationship between motion and musical rhythm from the first study and the relationship between motion and musical intensity. For the motion synthesis step, we propose two methods, a globally optimal method and a locally optimal method, and users can select either depending on their purpose. Our studies advance prior work on human motion analysis and synthesis in many respects. They contribute not only to entertainment systems for CG animation and humanoid robots, but also to applications for the digital archiving of intangible cultural heritage.
Bibliographic information | Issue date: 2007-03-22
Nippon Decimal Classification | 548 (Subject scheme: NDC)
Degree name | 博士(情報理工学) (Doctor of Information Science and Technology)
Degree | doctoral
Degree field | Information Science and Technology (情報理工学)
Degree grantor | University of Tokyo (東京大学)
Graduate school and department | Department of Information and Communication Engineering, Graduate School of Information Science and Technology (情報理工学系研究科電子情報学専攻)
Date of degree conferral | 2007-03-22
Degree conferral number | 甲第22809号
Diploma number | 博情第139号
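
Two of the ideas outlined in the abstract above are concrete enough to illustrate briefly: extracting keypose candidates as moments where the dancer's body nearly stops and aligning them with musical beats (first study), and attenuating high-frequency motion detail as the music is played faster (second study). The Python sketch below is a minimal, hypothetical illustration of those two ideas under stated assumptions, not the dissertation's actual algorithms; the function names, the local-minimum window, the beat tolerance, and the linear mapping from playback speed to filter cutoff are all assumptions made for the example.

```python
# Hypothetical sketch of two observations from the abstract:
#  (1) keypose candidates = frames where overall joint speed almost stops,
#      kept only if they fall close to a musical beat, and
#  (2) "omission of detail" at faster playback = low-pass filtering joint
#      trajectories with a cutoff that shrinks as the music speeds up.
# All names, thresholds, and mappings here are illustrative assumptions.
import numpy as np
from scipy.signal import argrelmin, butter, filtfilt


def keypose_candidates(joint_positions, beat_times, fps=120.0, tol=0.1):
    """joint_positions: (T, J, 3) capture data; beat_times: beat times in seconds."""
    # Aggregate per-frame joint speed (displacement summed over joints).
    speed = np.linalg.norm(np.diff(joint_positions, axis=0), axis=2).sum(axis=1)
    # Frames where movement locally almost stops are keypose candidates.
    minima = argrelmin(speed, order=int(0.25 * fps))[0]
    cand_times = minima / fps
    # Keep candidates that land within `tol` seconds of an estimated beat.
    keyposes = [t for t in cand_times if np.min(np.abs(beat_times - t)) < tol]
    return np.asarray(keyposes)


def reduce_detail(joint_angles, speed_ratio, fps=120.0, base_cutoff_hz=6.0):
    """Low-pass filter joint-angle trajectories; faster music -> lower cutoff."""
    cutoff = base_cutoff_hz / max(speed_ratio, 1.0)   # assumed mapping
    b, a = butter(2, cutoff / (0.5 * fps), btype="low")
    return filtfilt(b, a, joint_angles, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    motion = np.cumsum(rng.normal(size=(600, 20, 3)), axis=0) * 0.01  # 5 s at 120 fps
    beats = np.arange(0.0, 5.0, 0.5)                                  # 120 BPM beat grid
    print(keypose_candidates(motion, beats))
    angles = rng.normal(size=(600, 40))
    print(reduce_detail(angles, speed_ratio=1.5).shape)
```

The sketch only shows a plausible signal-processing core of the two observations; the dissertation additionally couples such analysis with musical rhythm estimation and human kinematic constraints.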