<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="http://abelab-main/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="http://abelab-main/feed.php">
        <title>ABELAB - publication:2017</title>
        <description>Human Centric Information Processing Lab.</description>
        <link>http://abelab-main/</link>
        <image rdf:resource="http://abelab-main/_media/wiki/dokuwiki.svg" />
        <dc:date>2026-04-16T00:39:56+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="http://abelab-main/publication/2017/hara_apsipa2017?rev=1590401407&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2017/k_inoue_apsipa2017?rev=1581678960&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2017/k_tanaka_interspeech2017?rev=1519452216&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2017/s_kamada_localrec2017?rev=1611020917&amp;do=diff"/>
                <rdf:li rdf:resource="http://abelab-main/publication/2017/s_kobayashi_ubicomp2017?rev=1519451858&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="http://abelab-main/_media/wiki/dokuwiki.svg">
        <title>ABELAB</title>
        <link>http://abelab-main/</link>
        <url>http://abelab-main/_media/wiki/dokuwiki.svg</url>
    </image>
    <item rdf:about="http://abelab-main/publication/2017/hara_apsipa2017?rev=1590401407&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-05-25T10:10:07+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Sound sensing using smartphones as a crowdsourcing approach</title>
        <link>http://abelab-main/publication/2017/hara_apsipa2017?rev=1590401407&amp;do=diff</link>
        <description>sound interface p:sunao_hara p:asako_hatakeyama p:shota_kobayashi p:masanobu_abe

Sound sensing using smartphones as a crowdsourcing approach

	*  Sunao Hara, Asako Hatakeyama, Shota Kobayashi, and Masanobu Abe
	* APSIPA Annual Summit and Conference 2017, FA-02.2, 6 pages, Kuala Lumpur, Malaysia, Dec. 2017. (Oral)

Abstract

Sounds are one of the most valuable information sources for human beings from the viewpoint of understanding the environment around them.
We have been investigating a method of detecting and…</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2017/k_inoue_apsipa2017?rev=1581678960&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-02-14T11:16:00+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>An Investigation to Transplant Emotional Expressions in DNN-based TTS Synthesis</title>
        <link>http://abelab-main/publication/2017/k_inoue_apsipa2017?rev=1581678960&amp;do=diff</link>
        <description>sound speech_synthesis p:katsuki_inoue p:sunao_hara p:masanobu_abe

An Investigation to Transplant Emotional Expressions in DNN-based TTS Synthesis

	*  Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, and Yusuke Ijima
	* APSIPA Annual Summit and Conference 2017, TP-P4.9, 6 pages, Kuala Lumpur, Malaysia, Dec. 2017. (Poster)

Abstract

In this paper, we investigate deep neural network (DNN) architectures to transplant emotional expressions to improve the expressiveness of DNN-based text-to-speech (TTS) synthesis.
DNN is expe…</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2017/k_tanaka_interspeech2017?rev=1519452216&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2018-02-24T06:03:36+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Speaker Dependent Approach for Enhancing a Glossectomy Patient&#039;s Speech via GMM-based Voice Conversion</title>
        <link>http://abelab-main/publication/2017/k_tanaka_interspeech2017?rev=1519452216&amp;do=diff</link>
        <description>sound voice_conversion

Speaker Dependent Approach for Enhancing a Glossectomy Patient&#039;s Speech via GMM-based Voice Conversion

	*  Kei Tanaka, Sunao Hara, Masanobu Abe, Masaaki Sato, and Shogo Minagi
	*  Proceedings of Interspeech 2017, pp. 3384–3388, Stockholm, Sweden, Aug. 2017.</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2017/s_kamada_localrec2017?rev=1611020917&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-01-19T01:48:37+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>New monitoring scheme for persons with dementia through monitoring-area adaptation according to stage of disease</title>
        <link>http://abelab-main/publication/2017/s_kamada_localrec2017?rev=1611020917&amp;do=diff</link>
        <description>New monitoring scheme for persons with dementia through monitoring-area adaptation according to stage of disease

	*  Shigeki Kamada, Yuji Matsuo,  and , 
	*  ACM SIGSPATIAL Workshop on Recommendations for Location-based Services and Social Networks (LocalRec), California, USA, Nov. 2017.</description>
    </item>
    <item rdf:about="http://abelab-main/publication/2017/s_kobayashi_ubicomp2017?rev=1519451858&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2018-02-24T05:57:38+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Prediction of subjective assessments for a noise map using deep neural networks</title>
        <link>http://abelab-main/publication/2017/s_kobayashi_ubicomp2017?rev=1519451858&amp;do=diff</link>
        <description>sound lifelog interface

Prediction of subjective assessments for a noise map using deep neural networks

	*  Shota Kobayashi, Sunao Hara, Masanobu Abe
	*  Proceedings of UbiComp/ISWC 2017 Adjunct, pp. 113–116, Hawaii, Sept. 2017.

Abstract

In this paper, we investigate a method of creating noise maps that take account of human senses.
Physical measurements alone are not enough to design our living environment; we also need to know subjective assessments.
To predict subjective assessments from loudness…</description>
    </item>
</rdf:RDF>
