Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
HI | 2024-03-19 14:10 | Online | Prediction and reproduction of the sensation of food penetration based on motion and audio information Shuma Shibasaki, Katsunori Okajima (YNU) | The haptic parameters of a haptic feedback device (Phantom Premium 1.0) were used to reproduce the sensation of penetrat... [more] | HI2024-24 pp.45-47
IEICE-ITS, IEICE-IE, ME, AIT, MMS [detail] | 2024-02-20 11:35 | Hokkaido, Hokkaido Univ. | [Special Talk] Prediction of Event Locations from Urgent Call Using Speech Recognition and Generative AI Masaki Yoshida, Keisuke Maeda, Ren Togo, Takahiro Ogawa, Miki Haseyama (Hokkaido University) | Operators handling road-related urgent calls are required to pinpoint the event location from information verbally commu... [more] | MMS2024-26 ME2024-42 AIT2024-26 pp.128-131
IEICE-EID, IDY, IEE-EDD, SID-JC, IEIJ-SSL [detail] | 2024-01-26 09:50 | Kyoto (Primary: On-site, Secondary: Online) | [Poster Presentation] Analysis of expiratory flow velocity on artificial vocal fold vibration Yuta Nagai, Fumiya Kondo, Atsushi Nakamura (Shizuoka Univ) | Synthesized speech has undergone significant changes in recent years. However, it has not yet been able to reproduce hum... [more] | IDY2024-5 pp.17-20
IEICE-SIS, BCT | 2023-10-13 11:05 | Yamaguchi, HISTORIA UBE (Primary: On-site, Secondary: Online) | Proposal of the Fractal Dimensional Filter and its Application to Detect the Voices of the Schlegel's Green Tree Frogs Hideo Shibayama (Shibaura Institute of Technology), Yoshiaki Makabe (Kanagawa Institute of Technology), Kenji Muto (Shibaura Institute of Technology), Tomoaki Kimura (Kanagawa Institute of Technology) | We propose a fractal dimensional filter to estimate time-position of the target sound from time series data. The filter ... [more] |
AIT, IIEEJ, AS, CG-ARTS | 2023-03-06 10:25 | Tokyo, Tokyo Polytechnic Univ. (Nakano) (Primary: On-site, Secondary: Online) | Automating NPC Reactions Using Acoustic Feature Recognition Technology in Games Maria Kuroda, Yoshihisa Kanematsu, Suguru Matsuyoshi, Hirokazu Yasuhara, Koji Mikami (TUT) | Speech recognition games are progressing day by day, mainly through the improvement of natural language processing techn... [more] | AIT2023-99 pp.227-230
MMS, ME, AIT, IEICE-IE, IEICE-ITS [detail] | 2023-02-21 14:45 | Hokkaido, Hokkaido Univ. | A Note on Improvement of Binauralization Performance Based on Multi-view Learning on 360° Videos Masaki Yoshida, Ren Togo, Takahiro Ogawa, Miki Haseyama (Hokkaido Univ.) | In this paper, we propose a binaural audio generation method based on multi-view learning using 360° videos. Conventiona... [more] | MMS2023-13 ME2023-33 AIT2023-13 pp.65-69
BCT, IEICE-SIS | 2022-10-13 14:15 | Aomori, Hachinohe Institute of Technology (Primary: On-site, Secondary: Online) | Toward Improving Speech Naturalness Introducing a Capsule Structure for Speech Enhancement Networks Reito Kasuga, Tetsuya Shimamura, Yosuke Sugiura, Nozomiko Yasui (Saitama Univ.) | Although the field of speech enhancement has been extensively studied around the world, phase tends to be neglected comp... [more] |
AIT, IIEEJ, AS, CG-ARTS | 2022-03-08 10:30 | Online | Acoustic analysis of urgency using noise-vocoded speech Ryosuke Sakamoto, Yasunari Obuchi (Tokyo Univ. Tech.) | In the wake of the Great East Japan Earthquake, there has been a growing interest in speech that appropriately conveys d... [more] | AIT2022-106 pp.265-266
AIT, IIEEJ, AS, CG-ARTS | 2022-03-08 10:30 | Online | A VR otome game that you can feel the character's body temperature using peltier devices Ayano Yamanaka, Ryu Nakagawa (Nagoya City Univ.) | This work is a VR otome game in which you can feel the body temperature of the target character with a self-made warm tactil... [more] | AIT2022-115 pp.291-294
AIT, IIEEJ, AS, CG-ARTS | 2022-03-08 10:30 | Online | Movie generation based on higher-order features of music Takamasa Kobori, Yasunari Obuchi (Tokyo Univ. Tech.) | When posting their music on the Internet, professional musicians create their own videos or order videos, but this is no... [more] | AIT2022-118 pp.299-300
HI, IEICE-HIP, ASJ-H, VRPSY [detail] | 2022-02-28 09:45 | Online | Estimation of speech emotional intensity model using impression ratings Megumi Kawase, Minoru Nakayama (Tokyo Tech) | We used deep learning to estimate emotional intensity from speech. In our previous study, we considered emotional intens... [more] |
IIEEJ, AIT | 2021-10-28 11:30 | Osaka (Primary: On-site, Secondary: Online) | Speech Organ Figure Models Development using a 3D printer Makoto J. Hirayama (O. I. T.) | Figure models of speech organs are being developed for research and education of speech production. A 3D printer is use... [more] | AIT2021-152 pp.47-50
3DMT | 2021-07-20 15:20 | | [Invited Talk] The concept of a small sound reproduce system for 3D audio and its evaluation method Manabu Okamoto (Sojo Univ.) | In recent years, it has become possible to increase the number of channels of audio equipment, and large-scale equipment... [more] | 3DIT2021-20 pp.27-31
AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 11:15 | Online | Proposal of a pinyin vocal practice app in Chinese preschool education using voice recognition technology Xiaohan Sun, Kunio Kondo, Kazuo Sasaki (TUT) | In China's preschool education, due to children's dialects, pronunciation habits and other problems, they can not cu... [more] | AIT2021-57 pp.105-107
AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 15:00 | Online | Proposal of speech education app for hearing impaired child using image recognition and voice recognition Su Yang, Kunio Kondo, Kazuo Sasaki (TUT) | In hearing-impaired education, speech education for spoken language is important along with sign language education. With the ... [more] | AIT2021-102 pp.245-248
AIT, IIEEJ, AS, CG-ARTS | 2021-03-08 15:00 | Online | Non-lyric voice detection method for singing data and lyrics alignment Kanade Saito, Keiko Ochi, Yasunari Obuchi (Tokyo Univ. Tech) | When aligning the singing data and lyrics, if there is a non-lyric voice in the given singing data, there is a problem o... [more] | AIT2021-128 pp.327-328
BCT, IEEE-BT | 2020-07-31 11:55 | Hokkaido | Development of automatically generated subtitle service utilizing speech recognition Kenji Sugihara, Masanori Ohsaki (TV TOKYO), Keisuke Handa (TV TOKYO Communications) | We have developed a service that displays subtitles generated automatically by speech recognition on the out screen. [more] | BCT2020-52 pp.13-15
AIT, IIEEJ, AS, CG-ARTS | 2020-03-13 14:20 | Tokyo, Tokyo University of Technology (Cancelled) | The research of photograph automatically display the voices of animals Takeshi Saito, Kazuo Sasaki (TUT) | This research aims to develop a system to recognize and analyze the voices of animals. Especially for the purpo... [more] | AIT2020-137 pp.293-294
HI, IEICE-IE, IEICE-ITS, MMS, ME, AIT [detail] | 2020-02-27 11:25 | Hokkaido, Hokkaido Univ. (Cancelled) | A speech synthesis from electromyography using CNN Kiyotaka Miyasaka, Yuji Sakamoto (Hokkaido Univ.) | Speech communication can provide various expressions intuitively. However, when communicating with aphonic people or in no... [more] | MMS2020-28 HI2020-28 ME2020-56 AIT2020-28 pp.163-167
AIT, IIEEJ, AS, CG-ARTS | 2019-03-12 11:15 | Tokyo, Waseda Univ. | Web contents for learning "Shigisan-engi-emaki" accessed by voice user interface Ryosuke Matsuda, Eri Yokoyama, Makoto J. Hirayama (OIT) | A web content for learning "Shigisan-engi-emaki" accessed by voice user interface was made. As a voice user interface (... [more] | AIT2019-114 pp.225-226