<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="http://abelab-main/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="http://abelab-main/feed.php">
        <title>ABELAB - publication:2020</title>
        <description>Human Centric Information Processing Lab.</description>
        <link>http://abelab-main/</link>
        <image rdf:resource="http://abelab-main/_media/wiki/dokuwiki.svg" />
        <dc:date>2026-05-05T06:10:00+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="http://abelab-main/publication/2020/ibunu_i_tencon2020?rev=1611020554&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2020/k_inoue_apsipa2020?rev=1611020291&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2020/k_inoue_icassp2020a?rev=1590400064&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2020/k_inoue_icassp2020b?rev=1590401337&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2020/k_matsumoto_interspeech2020?rev=1611020058&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="http://abelab-main/_media/wiki/dokuwiki.svg">
        <title>ABELAB</title>
        <link>http://abelab-main/</link>
        <url>http://abelab-main/_media/wiki/dokuwiki.svg</url>
    </image>
    <item rdf:about="http://abelab-main/publication/2020/ibunu_i_tencon2020?rev=1611020554&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-01-19T01:42:34+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Concept Drift Adaptation for Acoustic Scene Classifier Based on Gaussian Mixture Model</title>
        <link>http://abelab-main/publication/2020/ibunu_i_tencon2020?rev=1611020554&amp;do=diff</link>
        <description>sound acoustic_scene_classification p:ibunu_daqiqil_id p:sunao_hara p:masanobu_abe

Concept Drift Adaptation for Acoustic Scene Classifier Based on Gaussian Mixture Model

	* Ibunu Daqiqil Id, Sunao Hara, Masanobu Abe
	* The 2020 IEEE Region 10 Conference (IEEE-TENCON 2020), pp.450–455, Online/Virtual Conference (Osaka, Japan), Nov. 2020.

Abstract

In non-stationary environments, data might change over time, leading to variations in the underlying data distributions. This phenomenon is called concept drift and it negatively impacts…</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2020/k_inoue_apsipa2020?rev=1611020291&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-01-19T01:38:11+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Module Comparison of Transformer-TTS for Speaker Adaptation based on Fine-tuning</title>
        <link>http://abelab-main/publication/2020/k_inoue_apsipa2020?rev=1611020291&amp;do=diff</link>
        <description>sound speech_synthesis p:katsuki_inoue p:sunao_hara p:masanobu_abe

Module Comparison of Transformer-TTS for Speaker Adaptation based on Fine-tuning

	* Katsuki Inoue, Sunao Hara, Masanobu Abe
	* Proceedings of APSIPA Annual Summit and Conference 2020, pp.826–830, Online/Virtual Conference (Auckland, New Zealand), Dec. 2020. (Oral/ONLINE PRESENTATION)</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2020/k_inoue_icassp2020a?rev=1590400064&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-05-25T09:47:44+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Semi-Supervised Speaker Adaptation for End-to-End Speech Synthesis with Pretrained Models</title>
        <link>http://abelab-main/publication/2020/k_inoue_icassp2020a?rev=1590400064&amp;do=diff</link>
        <description>sound speech_synthesis p:katsuki_inoue p:sunao_hara p:masanobu_abe

Semi-Supervised Speaker Adaptation for End-to-End Speech Synthesis with Pretrained Models

	* Katsuki Inoue, Sunao Hara, Masanobu Abe, Tomoki Hayashi, Ryuichi Yamamoto, Shinji Watanabe
	* ICASSP 2020, pp. 7634–7638, Barcelona, Spain, May 2020. (Oral/ONLINE PRESENTATION)

Abstract

Recently, end-to-end text-to-speech (TTS) models have achieved remarkable performance; however, they require a large amount of paired text and speech data for training. On the other han…</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2020/k_inoue_icassp2020b?rev=1590401337&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-05-25T10:08:57+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>ESPnet-TTS: Unified, Reproducible, and Integratable Open Source End-to-End Text-to-Speech Toolkit</title>
        <link>http://abelab-main/publication/2020/k_inoue_icassp2020b?rev=1590401337&amp;do=diff</link>
        <description>sound speech_synthesis p:katsuki_inoue

ESPnet-TTS: Unified, Reproducible, and Integratable Open Source End-to-End Text-to-Speech Toolkit

	* Tomoki Hayashi, Ryuichi Yamamoto, Katsuki Inoue, Takenori Yoshimura, Shinji Watanabe, Tomoki Toda, Kazuya Takeda, Yu Zhang, Xu Tan
	* ICASSP 2020, pp. 7654–7658, Barcelona, Spain, May 2020. (Oral/ONLINE PRESENTATION)</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2020/k_matsumoto_interspeech2020?rev=1611020058&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-01-19T01:34:18+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Controlling the Strength of Emotions in Speech-Like Emotional Sound Generated by WaveNet</title>
        <link>http://abelab-main/publication/2020/k_matsumoto_interspeech2020?rev=1611020058&amp;do=diff</link>
        <description>sound speech_synthesis p:kento_matsumoto p:sunao_hara p:masanobu_abe

Controlling the Strength of Emotions in Speech-Like Emotional Sound Generated by WaveNet

	* Kento Matsumoto, Sunao Hara, Masanobu Abe
	* Proceedings of Interspeech 2020, pp.3421–3425, Online/Virtual Conference (Shanghai, China), Oct. 2020. (Oral/ONLINE PRESENTATION)

Abstract

This paper proposes a method to enhance the controllability of a Speech-like Emotional Sound (SES). In our previous study, we proposed an algorithm to generate SES by employing WaveNet as…</description>
    </item>
</rdf:RDF>
