<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="http://abelab-main/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="http://abelab-main/feed.php">
        <title>ABELAB - publication:2021</title>
        <description>Human Centric Information Processing Lab.</description>
        <link>http://abelab-main/</link>
        <image rdf:resource="http://abelab-main/_media/wiki/dokuwiki.svg" />
        <dc:date>2026-04-17T15:45:39+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="http://abelab-main/publication/2021/ibnu_i_astesj2021?rev=1652926192&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2021/k_inoue_specom2021?rev=1652925342&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2021/n_kakegawa_interspeech2021?rev=1652925559&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="http://abelab-main/_media/wiki/dokuwiki.svg">
        <title>ABELAB</title>
        <link>http://abelab-main/</link>
        <url>http://abelab-main/_media/wiki/dokuwiki.svg</url>
    </image>
    <item rdf:about="http://abelab-main/publication/2021/ibnu_i_astesj2021?rev=1652926192&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2022-05-19T02:09:52+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Acoustic Scene Classifier Based on Gaussian Mixture Model in the Concept Drift Situation</title>
        <link>http://abelab-main/publication/2021/ibnu_i_astesj2021?rev=1652926192&amp;do=diff</link>
        <description>concept_drift environment_sound_processing p:ibnu_daqiqil_id p:sunao_hara p:masanobu_abe

Acoustic Scene Classifier Based on Gaussian Mixture Model in the Concept Drift Situation

	* Ibnu Daqiqil Id, Sunao Hara, Masanobu Abe
	* Advances in Science, Technology and Engineering Systems Journal, vol. 6, no. 5, pp. 167–176, Sept. 2021.
	* doi: 10.25046/aj060519

Abstract

The data distribution used in model training is assumed to be similar to the distribution when the model is applied. However, in some applications, data distributions may chang…</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2021/k_inoue_specom2021?rev=1652925342&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2022-05-19T01:55:42+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Model architectures to extrapolate emotional expressions in DNN-based text-to-speech</title>
        <link>http://abelab-main/publication/2021/k_inoue_specom2021?rev=1652925342&amp;do=diff</link>
        <description>sound speech_synthesis p:katsuki_inoue p:sunao_hara p:masanobu_abe

Model architectures to extrapolate emotional expressions in DNN-based text-to-speech

	* Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, Yusuke Ijima
	* Speech Communication, vol. 126, pp. 35–43, Feb. 2021. (Available online 24 November 2020)

Abstract

This paper proposes architectures that facilitate the extrapolation of emotional expressions in deep neural network (DNN)-based text-to-speech (TTS). In this study, the meaning of “extrapolate emotional exp…</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2021/n_kakegawa_interspeech2021?rev=1652925559&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2022-05-19T01:59:19+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Phonetic and Prosodic Information Estimation Using Neural Machine Translation for Genuine Japanese End-to-End Text-to-Speech</title>
        <link>http://abelab-main/publication/2021/n_kakegawa_interspeech2021?rev=1652925559&amp;do=diff</link>
        <description>sound speech_synthesis p:naoto_kakegawa p:sunao_hara p:masanobu_abe

Phonetic and Prosodic Information Estimation Using Neural Machine Translation for Genuine Japanese End-to-End Text-to-Speech

	* Naoto Kakegawa, Sunao Hara, Masanobu Abe, Yusuke Ijima
	* Proceedings of Interspeech 2021, pp. 126–130, Online/Virtual Conference (Brno, Czech Republic), Sept. 2021. (Oral/ONLINE PRESENTATION)</description>
    </item>
</rdf:RDF>
