Eigenvoice-based character conversion and its evaluations
http://hdl.handle.net/2261/52455
Name / File | License | Action
---|---|---
[48106452.pdf](https://repository.dl.itc.u-tokyo.ac.jp/record/4067/files/48106452.pdf) (2.0 MB) | |
Item type | 学位論文 / Thesis or Dissertation(1)
---|---
Public release date | 2012-10-30
Title | Eigenvoice-based character conversion and its evaluations
Language | eng
Resource type | thesis (http://purl.org/coar/resource_type/c_46ec)
Alternative title | Eigenvoiceに基づくキャラクター変換とその評価 (Character conversion based on eigenvoice and its evaluation)
Author | Pongkittiphan, Teeraphon (WEKO identifier: 9377)
Author alias | ポンキッティパン, ティーラポン (WEKO identifier: 9378)
Author affiliation | Department of Information and Communication Engineering, Graduate School of Information Science and Technology, The University of Tokyo
Abstract | This thesis describes a new voice-conversion method that performs character conversion based on the eigenvoice GMM (EV-GMM) approach. Using an eigenvoice space built from 273 speakers, together with speech samples of three different characters produced by a single skilled voice actor/actress, the method can generate the voices of the three characters from an arbitrary speaker's voice while preserving that speaker's identity. Listening tests were carried out with two kinds of synthetic voices: before and after character conversion. The results showed that listeners, both native and non-native speakers, perceived the character-voice differences as the experimenters intended. The differences were perceived even when the F0 difference between the two voices was very small, indicating that the proposed method performs character conversion better than conventional F0-based conversion. Further, an acoustic comparison between the different characters was made for two cases: the voice actor's own speech and the proposed method's output. The results showed that the proposed method realizes acoustically valid modifications between different characters.
Bibliographic information | Issue date: 2012-09-27
Nippon Decimal Classification (NDC) | 007
Degree name | Master (Information Science and Technology)
Degree | master
Graduate school / Department | Graduate School of Information Science and Technology, Department of Information and Communication Engineering
Date granted | 2012-09-27
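The abstract above rests on the eigenvoice idea: each training speaker's GMM means are stacked into a "supervector", a low-dimensional eigenvoice basis is learned over those supervectors (e.g., by PCA), and a new speaker is then represented as the average voice plus a weighted combination of eigenvoices, with only the weights estimated from the new speaker's data. The following is a minimal, self-contained sketch of that adaptation step on random stand-in data; all dimensions, names, and the least-squares weight estimation are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

# Toy eigenvoice adaptation sketch (NOT the thesis's EV-GMM implementation):
# - each training speaker -> one supervector of stacked GMM means
# - PCA/SVD over supervectors -> eigenvoice basis E and bias mu0
# - new speaker -> weights w estimated by least squares, mean = mu0 + E^T w
rng = np.random.default_rng(0)
n_speakers, dim = 273, 64          # 273 speakers as in the abstract; toy dimension
K = 8                              # number of eigenvoices kept (assumption)

S = rng.normal(size=(n_speakers, dim))   # stand-in for real speaker supervectors
mu0 = S.mean(axis=0)                     # bias ("average voice")
_, _, Vt = np.linalg.svd(S - mu0, full_matrices=False)
E = Vt[:K]                               # (K, dim) eigenvoice basis

def adapt(target_supervector, E, mu0):
    """Estimate eigenvoice weights for a new speaker by least squares."""
    w, *_ = np.linalg.lstsq(E.T, target_supervector - mu0, rcond=None)
    return mu0 + E.T @ w                 # adapted mean supervector

target = rng.normal(size=dim)            # stand-in for a new speaker's statistics
adapted = adapt(target, E, mu0)
print(adapted.shape)                     # (64,)
```

Because only K weights (not the full supervector) are estimated, very little adaptation data is needed for a new speaker, which is what makes conversion "from an arbitrary speaker" practical in the EV-GMM framework.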