Nakano, Y.I., Nihei, F., Ishii, R. and Higashinaka, R. 2024. Selecting Iconic Gesture Forms Based on Typical Entity Images. Journal of Information Processing. 32, (2024), 196–205. DOI:https://doi.org/10.2197/ipsjjip.32.196.
Ito, A., Nakano, Y.I., Nihei, F., Sakato, T., Ishii, R., Fukayama, A. and Nakamura, T. 2023. Estimating and Visualizing Persuasiveness of Participants in Group Discussions. Journal of Information Processing. 31, (2023), 34–44. DOI:https://doi.org/10.2197/ipsjjip.31.34.
Nihei, F. and Nakano, Y.I. 2019. Exploring Methods for Predicting Important Utterances Contributing to Meeting Summarization. Multimodal Technologies and Interaction. 3, 3 (2019). DOI:https://doi.org/10.3390/mti3030050.
Nihei, F., Nakano, Y.I. and Takase, Y. 2019. Estimating Important Utterances for Discussion Summarization Based on Verbal and Nonverbal Information (in Japanese). IEICE Transactions A (Japanese Edition). J102-A, 2 (2019), 35–47.
Nihei, F., Takase, Y. and Nakano, Y.I. 2017. Estimating Important Utterances in Group Discussions Based on Nonverbal Information: Toward Generating Summaries of Group Discussions (in Japanese). IEICE Transactions A (Japanese Edition). J100-A, 1 (2017), 34–44.
Ishii, R., Nihei, F., Ishii, Y., Otsuka, A., Matsuo, K., Nomoto, N., Fukayama, A. and Nakamura, T. 2023. Prediction of Love-Like Scores After Speed Dating Based on Pre-obtainable Personal Characteristic Information. Human-Computer Interaction – INTERACT 2023 (Cham, 2023), 551–556.
Nihei, F., Ishii, R., Nakano, Y.I., Fukayama, A. and Nakamura, T. 2023. Whether Contribution of Features Differ Between Video-Mediated and In-Person Meetings in Important Utterance Estimation. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2023), 1–5.
Nihei, F., Ishii, R., Nakano, Y.I., Nishida, K., Masumura, R., Fukayama, A. and Nakamura, T. 2022. Dialogue Acts Aided Important Utterance Detection Based on Multiparty and Multimodal Information. Interspeech 2022, 23rd Annual Conference of the International Speech Communication Association, Incheon, Korea, 18-22 September 2022 (2022), 1086–1090.
Ito, A., Nakano, Y.I., Nihei, F., Sakato, T., Ishii, R., Fukayama, A. and Nakamura, T. 2022. Predicting Persuasiveness of Participants in Multiparty Conversations. 27th International Conference on Intelligent User Interfaces (New York, NY, USA, 2022), 85–88.
Nihei, F. and Nakano, Y.I. 2021. Web-ECA: A Web-Based ECA Platform. Proceedings of the 2021 International Conference on Multimodal Interaction (New York, NY, USA, 2021), 835–836.
Ueno, R., Nakano, Y.I., Zeng, J. and Nihei, F. 2020. Estimating the Intensity of Facial Expressions Accompanying Feedback Responses in Multiparty Video-Mediated Communication. Proceedings of the 2020 International Conference on Multimodal Interaction (New York, NY, USA, 2020), 144–152.
Nihei, F. and Nakano, Y.I. 2020. A Multimodal Meeting Browser That Implements an Important Utterance Detection Model Based on Multimodal Information. Proceedings of the 25th International Conference on Intelligent User Interfaces Companion (New York, NY, USA, 2020), 59–60.
Nihei, F., Nakano, Y., Higashinaka, R. and Ishii, R. 2019. Determining Iconic Gesture Forms Based on Entity Image Representation. 2019 International Conference on Multimodal Interaction (New York, NY, USA, 2019), 419–425.
Fukasawa, S., Akatsu, H., Taguchi, W., Nihei, F. and Nakano, Y. 2019. Correction to: Presenting Low-Accuracy Information of Emotion Recognition Enhances Human Awareness Performance. Human Interface and the Management of Information. Visual Information and Knowledge Management: Thematic Area, HIMI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, Part I (Berlin, Heidelberg, 2019), C1.
Fukasawa, S., Akatsu, H., Taguchi, W., Nihei, F. and Nakano, Y. 2019. Presenting Low-Accuracy Information of Emotion Recognition Enhances Human Awareness Performance. Human Interface and the Management of Information. Visual Information and Knowledge Management (Cham, 2019), 415–424.
Taguchi, W., Nihei, F., Takase, Y., Nakano, Y.I., Fukasawa, S. and Akatsu, H. 2018. Effects of Face and Voice Deformation on Participant Emotion in Video-mediated Communication. Proceedings of the 20th International Conference on Multimodal Interaction: Adjunct (New York, NY, USA, 2018), 8:1–8:5.
Nihei, F., Nakano, Y.I. and Takase, Y. 2018. Fusing Verbal and Nonverbal Information for Extractive Meeting Summarization. Proceedings of the Group Interaction Frontiers in Technology (New York, NY, USA, 2018), 9:1–9:9.
Tomiyama, K., Nihei, F., Nakano, Y.I. and Takase, Y. 2018. Identifying discourse boundaries in group discussions using a multimodal embedding space. CEUR Workshop Proceedings (2018).
Nihei, F., Nakano, Y.I. and Takase, Y. 2017. Predicting Meeting Extracts in Group Discussions Using Multimodal Convolutional Neural Networks. Proceedings of the 19th ACM International Conference on Multimodal Interaction (New York, NY, USA, 2017), 421–425.
Vrzakova, H., Bednarik, R., Nakano, Y.I. and Nihei, F. 2016. Speakers’ head and gaze dynamics weakly correlate in group conversation. Eye Tracking Research and Applications Symposium (ETRA) (2016).
Nihei, F., Nakano, Y.I. and Takase, Y. 2016. Meeting Extracts for Discussion Summarization Based on Multimodal Nonverbal Information. Proceedings of the 18th ACM International Conference on Multimodal Interaction (New York, NY, USA, 2016), 185–192.
Vrzakova, H., Bednarik, R., Nakano, Y. and Nihei, F. 2014. Influential statements and gaze for persuasion modeling. Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational – NordiCHI ’14 (New York, NY, USA, 2014), 915–918.
Nihei, F., Nakano, Y.I., Hayashi, Y., Hung, H.-H. and Okada, S. 2014. Predicting Influential Statements in Group Discussions Using Speech and Head Motion Information. Proceedings of the 16th International Conference on Multimodal Interaction (New York, NY, USA, 2014), 136–143.