| Committee | Date/Time | Place | Paper Title / Authors | Abstract | Paper # |
|---|---|---|---|---|---|
| AIT, IIEEJ, AS, CG-ARTS | 2022-03-08 10:30 | Online | Acoustic analysis of urgency using noise-vocoded speech / Ryosuke Sakamoto, Yasunari Obuchi (Tokyo Univ. Tech.) | In the wake of the Great East Japan Earthquake, there has been a growing interest in speech that appropriately conveys d... | AIT2022-106 pp.265-266 |
| AIT, IIEEJ, AS, CG-ARTS | 2022-03-08 10:30 | Online | Movie generation based on higher-order features of music / Takamasa Kobori, Yasunari Obuchi (Tokyo Univ. Tech.) | When posting their music on the Internet, professional musicians create their own videos or order videos, but this is no... | AIT2022-118 pp.299-300 |
| AIT, IIEEJ, AS, CG-ARTS | 2022-03-08 10:30 | Online | Automatic generation of fluctuation to humanize MIDI sounds / Shunya Hayashi, Yasunari Obuchi (Tokyo Univ. Tech.) | | AIT2022-121 pp.307-308 |
| AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 13:30 | Online | Automatic acoustic scene analysis of touring videos / Daiju Sazuka, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | This research proposes a system for creating touring videos with a higher sense of realism. The system combines video ta... | AIT2021-98 pp.231-232 |
| AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 15:00 | Online | Proposal of Factor Analysis Method of Emotion Evocation by Music Listening / Yuki Notomi, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | When we listen to music, we often feel chills and thrills. These bodily functions caused by the music are defined as "em... | AIT2021-127 pp.323-326 |
| AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 15:00 | Online | Non-lyric voice detection method for singing data and lyrics alignment / Kanade Saito, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | When aligning singing data and lyrics, if there is a non-lyric voice in the given singing data, there is a problem o... | AIT2021-128 pp.327-328 |
| AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 15:00 | Online | Effect of background noise on stereophonic headphone playback / Sayuki Komatsu, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | To investigate the realism of binaural and monaural sound played through headphones, an experiment ... | AIT2021-129 pp.329-332 |
| AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 15:00 | Online | Automatic Recommendation System of Music Genres Suitable for the Voice Type / Noriaki Tsuru, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | | AIT2021-130 pp.333-336 |
| AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 15:00 | Online | Comparative Listening Experiments between Guitar Amplifier and Simulator / Keisuke Yoshimura, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | We conducted a comparative listening test of music impressions between guitar amplifiers and a digital amplifier simulator... | AIT2021-131 pp.337-340 |
| AIT, IIEEJ, AS, CG-ARTS | 2020-03-13 11:20 | Tokyo, Tokyo University of Technology (Cancelled) | Sound Source Selection for Synthesizer Using Affective Expression / Naoya Kinoshita, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | Understanding the theory of timbre creation is required to use a synthesizer. In this paper, we propose a system that... | AIT2020-101 pp.175-176 |
| AIT, IIEEJ, AS, CG-ARTS | 2019-03-12 14:00 | Tokyo, Waseda Univ. | Discrimination of pencil writing sounds using machine learning / Hikaru Oishi, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | | AIT2019-132 pp.291-294 |
| AIT, IIEEJ, AS, CG-ARTS | 2019-03-12 14:00 | Tokyo, Waseda Univ. | Estimation of the cooking progress of deep-fried foods using machine learning and sound analysis / Yuta Yamamoto, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | | AIT2019-141 pp.321-324 |
| AIT, IIEEJ, AS, CG-ARTS | 2019-03-12 14:00 | Tokyo, Waseda Univ. | Analysis of Radio Speech Using Deep Neural Network / Wataru Yokota, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | Radio broadcasting, which uses audio information only, is still popular in a modern society in which television and mo... | AIT2019-142 pp.325-328 |
| AIT, IIEEJ, AS, CG-ARTS | 2019-03-12 14:00 | Tokyo, Waseda Univ. | Interactive Music Player that Visualizes Impression Classes Estimated by Machine Learning / Hiroki Matsui, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | | AIT2019-143 pp.329-332 |
| AIT, IIEEJ, AS, CG-ARTS | 2019-03-12 14:00 | Tokyo, Waseda Univ. | Natural Language Interface for Synthesizer Based on Word2vec -- What is the sound of Panda? -- / Lucian Meinicke, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | We have developed an interface that can be operated by natural language so that users with less knowledge of synthesi... | AIT2019-144 pp.333-336 |
| AIT, IIEEJ, AS, CG-ARTS | 2019-03-12 14:00 | Tokyo, Waseda Univ. | Analysis of auditory characteristics of distorted music / Yuma Ono, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | Recent commercial trends toward high sound pressure music have brought about the modification of music so as to increase the sou... | AIT2019-147 pp.345-346 |
| AIT, IIEEJ, AS, CG-ARTS | 2019-03-12 14:00 | Tokyo, Waseda Univ. | Analysis of Manzai Speech Using Machine Learning / Tetsuya Kamijima, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech.) | Aiming at automatic manzai speech evaluation, we analyzed manzai speech using machine learning. In our method, as a pre... | AIT2019-168 pp.413-416 |
|