Curriculum Vitae

Kazuhiko Shinozawa

  (篠澤 一彦)

Profile Information

Affiliation
Professor, Division of Math, Sciences, and Information Technology in Education, Osaka Kyoiku University
Degree
B.Eng. (Keio University)
M.Eng. (Keio University)
Ph.D. in Informatics (Kyoto University)

Researcher number
80395160
J-GLOBAL ID
202101010709748024
researchmap Member ID
R000029802

Committee Memberships

 1

Awards

 3

Papers

 51
  • Shohei Yamashita, Tomohiro Kurihara, Tetsushi Ikeda, Kazuhiko Shinozawa, Satoshi Iwaki
    Advanced Robotics, 34(20) 1309-1323, Oct, 2020  Peer-reviewed
  • 長谷川孔明, 古谷誠悟, 金井祐輔, 篠沢一彦, 今井倫太
    知能と情報(日本知能情報ファジィ学会誌), 30(4) 634-642, Aug, 2018  Peer-reviewed
  • Yoichi Morales, Atsushi Watanabe, Florent Ferreri, Jani Even, Kazuhiko Shinozawa, Norihiro Hagita
    Robotics and Autonomous Systems, 108 13-26, May, 2018  Peer-reviewed
  • Reo Matsumura, Masahiro Shiomi, Kayako Nakagawa, Kazuhiko Shinozawa, Takahiro Miyashita
    Journal of Robotics and Mechatronics, 28(1) 107-108, 2016  Peer-reviewed
    We developed robovie-mR2, a desktop-sized communication robot that incorporates a “Kawaii” design to create a familiar appearance, an important factor in its acceptance by both researchers and users. It can interact with people using multiple sensors, including a camera and microphones, expressive gestures, and an information display. We believe that robovie-mR2 will become a useful platform for advancing human-robot interaction research. We also give examples of human-robot interaction studies that use robovie-mR2.
  • Masahiro Shiomi, Kayako Nakagawa, Kazuhiko Shinozawa, Reo Matsumura, Hiroshi Ishiguro, Norihiro Hagita
    International Journal of Social Robotics, 9(1) 5-15, 2016  Peer-reviewed
    This paper investigates whether being touched by a robot motivates a person. The human science literature has shown that touching others facilitates the efforts of those touched. In the human-robot interaction research field, however, past research has not focused on the effects of such touches from robots to people. A few studies reported negative impressions from people, even when a touch from a person to a robot left a positive impression. To reveal whether a robot's touch positively affects humans, we conducted an experiment in which a robot asked participants to perform a simple, monotonous task with or without touch interaction between the robot and the participants. Our results showed that both touches from the robot to the participants and touches from the participants to the robot facilitated their efforts.
  • Shota Sasai, Itaru Kitahara, Yoshinari Kameda, Yuichi Ohta, Masayuki Kanbara, Yoichi Morales, Norimichi Ukita, Norihiro Hagita, Tetsushi Ikeda, Kazuhiko Shinozawa
    PROCEEDINGS OF THE 2015 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY WORKSHOPS, 40-46, 2015  Peer-reviewed
    Many efforts in industrial, political, and academic fields aim to realize a society where autonomous vehicles become everyday transportation. To make autonomous vehicles more acceptable to the public, it is necessary to develop not only advanced automated driving control but also comfortable environments for passengers. This paper describes our attempt to improve passenger comfort in autonomous vehicles. We developed an experimental vehicle equipped with a Mixed Reality (MR) display system that aims to reduce anxiety through visual cues. The proposed system visualizes the road surface outside the passenger's field of view by projecting a see-through image onto the dashboard. Moreover, it overlays computer graphics of the wheel trajectories on the displayed image using MR so that passengers can easily confirm that the automated driving control is working correctly. The displayed images enable passengers to comprehend the road conditions and the expected vehicle route beyond their field of view. We measured changes in mental stress using physiological indices: heart rate variability and sweat information.
  • R. Hashimoto, R. Nomura, Masayuki Kanbara, Norimichi Ukita, Tetsushi Ikeda, Yoichi Morales, Atsushi Watanabe, Kazuhiko Shinozawa, Norihiro Hagita
    IEEE International Conference on Vehicular Electronics and Safety, ICVES 2015, Yokohama, Japan, November 5-7, 2015, 158-163, 2015  Peer-reviewed
  • Atsushi Watanabe, Tetsushi Ikeda, Yoichi Morales, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015, Hamburg, Germany, September 28 - October 2, 2015, 5763-5769, 2015  Peer-reviewed
  • Yoichi Morales, Atsushi Watanabe, Florent Ferreri, Jani Even, Tetsushi Ikeda, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26-30 May, 2015, 6153-6159, 2015  Peer-reviewed
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Katsunori Shimohara, Mitsunori Miki, Norihiro Hagita
    International Journal of Social Robotics, 7(2) 253-263, 2015  Peer-reviewed
  • Yoichi Morales, Jani Even, Nagasrikanth Kallakuri, Tetsushi Ikeda, Kazuhiko Shinozawa, Tadahisa Kondo, Norihiro Hagita
    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2197-2202, 2014  Peer-reviewed
    This work introduces a 3D visibility model for comfortable autonomous vehicles. The model computes a visibility index based on the pose of the wheelchair within the environment. We correlate this index with human navigational comfort (discomfort) and discuss the importance of modeling visibility to improve human riding comfort. The proposed approach models the 3D visual field of view combined with a two-layered environmental representation. The field of view is modeled with information from the pose of the robot, a 3D laser sensor, and a two-layered environmental representation composed of a 3D geometric map with traversable-area information. Human navigational discomfort was extracted from participants riding the autonomous wheelchair. Results show a fair correlation between poor-visibility locations (e.g., blind corners) and human discomfort. The approach can model places with identical traversable characteristics but different visibility, and it differentiates visibility characteristics according to traveling direction.
  • Masato Sakata, Zeynep Yucel, Kazuhiko Shinozawa, Norihiro Hagita, Michita Imai, Michiko Furutani, Rumiko Matsuoka
    ACM Transactions on Management Information Systems, 4(3) 13:1-13:21, Oct, 2013  Peer-reviewed
    Common periodic health check-ups include several clinical test items at affordable cost. However, these standard tests do not directly indicate signs of most lifestyle diseases. In order to detect such diseases, a number of additional specific clinical tests are required, which increase the cost of the health check-up. This study aims to enrich our understanding of the common health check-ups and proposes a way to estimate the signs of several lifestyle diseases based on the standard tests in common examinations without performing any additional specific tests. In this manner, we enable a diagnostic process, where the physician may prefer to perform or avoid a costly test according to the estimation carried out through a set of common affordable tests. To that end, the relation between standard and specific test results is modeled with a multivariate kernel density estimate. The condition of the patient regarding a specific test is assessed following a Bayesian framework. Our results indicate that the proposed method achieves an overall estimation accuracy of 84%. In addition, an outstanding estimation accuracy is achieved for a subset of high-cost tests. Moreover, comparison with standard artificial intelligence methods suggests that our algorithm outperforms the conventional methods. Our contributions are as follows: (i) promotion of affordable health check-ups, (ii) high estimation accuracy in certain tests, (iii) generalization capability due to ease of implementation on different platforms and institutions, (iv) flexibility to apply to various tests and potential to improve early detection rates.
  • Masahiro Shiomi, Kazuhiko Shinozawa, Yoshifumi Nakagawa, Takahiro Miyashita, Toshio Sakamoto, Toshimitsu Terakado, Hiroshi Ishiguro, Norihiro Hagita
    International Journal of Social Robotics, 5(2) 251-262, Feb, 2013  Peer-reviewed
  • Koji Kamei, Tetsushi Ikeda, Masahiro Shiomi, Hiroyuki Kidokoro, Akira Utsumi, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    Annals of Telecommunications (Special Issue on Ubiquitous Networked Robots), 67(7) 329-340, Jun, 2012  Peer-reviewed
  • 城所 宏行, 亀井 剛次, 篠沢 一彦, 宮下 敬宏, 萩田紀博
    信学論 D, 95-D(4) 790-798, Apr, 2012  Peer-reviewed
  • 飯尾尊優, 塩見昌裕, 篠沢一彦, 下原勝憲, 萩田紀博
    情報通信学会論文誌, 53(4) 1251-1268, Apr, 2012  Peer-reviewed
  • Kayako Nakagawa, Masahiro Shiomi, Kazuhiko Shinozawa, Reo Matsumura, Hiroshi Ishiguro, Norihiro Hagita
    International Journal of Social Robotics, 5(1) 5-16, Mar, 2012  Peer-reviewed
  • 中川佳弥子, 塩見昌裕, 篠沢一彦, 松村礼央, 石黒浩, 萩田紀博
    信学論 A, 95-A(1) 136-144, Jan, 2012  Peer-reviewed
  • 杉山治, 篠沢一彦, 今井倫太, 萩田紀博
    信学論 A, 95-A(1) 136-144, Jan, 2012  Peer-reviewed
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Katsunori Shimohara, Norihiro Hagita
    International Journal of Social Robotics, 3(4) 405-414, Sep, 2011  Peer-reviewed
  • Hiroyuki Kidokoro, Koji Kamei, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    UbiComp 2011: Ubiquitous Computing, 13th International Conference, UbiComp 2011, Beijing, China, September 17-21, 2011, Proceedings, 569-570, 2011  Peer-reviewed
  • Koji Kamei, Tetsushi Ikeda, Hiroyuki Kidokoro, Masahiro Shiomi, Akira Utsumi, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    Proceedings - 2011 IEEE International Conference on Privacy, Security, Risk and Trust and IEEE International Conference on Social Computing, PASSAT/SocialCom 2011, 235-241, 2011  Peer-reviewed
    Applying the technologies of a network robot system, we incorporate the recommendation methods used in E-commerce into a real-world retail shop. We constructed a platform for ubiquitous networked robots that focuses on a shop environment where communication robots perform customer navigation. The platform estimates customer interests from pre-purchasing behaviors observed by networked sensors, without concrete IDs, and controls visible communication robots in the environment to guide customers. The system can perform collaborative-filtering-based recommendations without excessively intruding upon customers' privacy. Since observations and recommendations in real environments can easily annoy customers, the robots' interaction scenarios are as important as the system itself. Two types of navigation scenarios were implemented and investigated in experiments with 80 participants. The results indicate that in the cooperative navigation scenario, participants who interacted with communication robots located both outside and inside the shop felt friendlier toward the robots and found it easier to understand what they said.
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Katsunori Shimohara, Norihiro Hagita
    PROCEEDINGS OF THE SIXTEENTH INTERNATIONAL SYMPOSIUM ON ARTIFICIAL LIFE AND ROBOTICS (AROB 16TH '11), 585-588, 2011  Peer-reviewed
    This paper reports a new phenomenon in which a person's gestures or words are implicitly modified by a robot's gestures or words. Previous research focused on the implicit effect of a robot's gestures on a person's gestures, or of a robot's words on a person's words, but not on the implicit effect of a robot's gestures on a person's words or of a robot's words on a person's gestures. We supposed that such an effect arises between different modalities and defined it as a cross-modal effect. To verify hypotheses about the cross-modal effect, we conducted an experiment focusing on the pair of a pointing gesture and a deictic word. The results showed that participants used a pointing gesture more often when the robot used a deictic word, and used a deictic word more often when the robot used a pointing gesture. Therefore, a person's pointing gestures were implicitly modified by the robot's deictic words, and a person's deictic words were implicitly modified by the robot's pointing gestures. The cross-modal effect is expected to be applicable to robot dialog design to elicit comprehensible behavior from a person.
  • 飯尾尊優, 塩見昌裕, 篠沢一彦, 秋本高明, 下原勝憲, 萩田紀博
    ヒューマンインタフェース学会, 13(1) 9-21, Jan, 2011  Peer-reviewed
  • 中川佳弥子, 塩見昌裕, 篠沢一彦, 松村礼央, 石黒浩, 萩田紀博
    ヒューマンインタフェース学会, 13(1) 31-40, Jan, 2011  Peer-reviewed
  • 中川佳弥子, 篠沢一彦, 松村 礼央, 石黒 浩, 萩田紀博
    ヒューマンインタフェース学会, 12(3) 239-248, Aug, 2010  Peer-reviewed
  • 飯尾尊優, 塩見昌裕, 篠沢一彦, 宮下敬宏, 下原勝憲, 秋本高明, 萩田紀博
    情報処理学会論文誌, 51(2) 277-289, Feb, 2010  Peer-reviewed
  • Osamu Sugiyama, Kazuhiko Shinozawa, Takaaki Akimoto, Norihiro Hagita
    SOCIAL ROBOTICS, ICSR 2010, 6414 90-99, 2010  Peer-reviewed
    This paper reports the docking and metaphor effects on persuasion among multi-robot healthcare systems. The goal of our research is to make a robot friend that lives with its users and persuades them to make appropriate healthcare decisions. To realize such a robot friend, we propose a physical approach called docking as well as a contextual approach called metaphor to perform relational inheritance among multi-robot systems. We implemented a multi-robot persuasion system based on the two approaches and verified its effectiveness. The experimental results revealed that users emphasize interpersonal relationships when deciding whether to follow the robot's advice when utilizing the metaphor approach, and that users emphasize robot aggressiveness when utilizing the docking approach.
  • Koji Kamei, Kazuhiko Shinozawa, Tetsushi Ikeda, Akira Utsumi, Takahiro Miyashita, Norihiro Hagita
    International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010, 19:1-19:8, 2010  Peer-reviewed
    By applying network robot technologies, recommendation methods from E-commerce are incorporated into a real-world retail shop. We constructed an experimental shop environment where communication robots recommend specific items to customers according to their purchasing behavior as observed by networked sensors. A recommendation scenario was implemented with three robots and investigated through an experiment. The results indicate that participants stayed longer in front of the shelves when the communication robots tried to interact with them, and were influenced to carry out purchasing behaviors similar to those observed earlier. Other results suggest that the probability of customers' zone transitions can be used to anticipate their purchasing behavior.
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Katsunori Shimohara, Norihiro Hagita
    SOCIAL ROBOTICS, ICSR 2010, 6414 372-381, 2010  Peer-reviewed
    Social robots need to recognize the objects indicated by people to work in real environments. This paper presents the entrainment of human pointing gestures during interaction with a robot and investigates which robot gestures are important for such entrainment. We conducted a Wizard-of-Oz experiment where a person and a robot referred to objects and evaluated the entrainment frequency. The frequency was lowest when the robot used only pointing gestures and highest when it used both gazing and pointing gestures. These results suggest that not only a robot's pointing gestures but also its gazing gestures affect entrainment. We conclude that the entrainment of pointing gestures might improve a robot's ability to recognize them.
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Katsunori Shimohara, Norihiro Hagita
    IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 5294-5301, 2010  Peer-reviewed
    Social robots that provide services to humans in real environments have been developed in recent years. For user-friendliness, such a robot should appropriately recognize its users' requests through human-like communication. However, people's communication styles are too diverse to achieve this goal. If the robot could shape those styles, its recognition ability would improve. Entrainment, a phenomenon in which a human's behavior synchronizes with a robot's behavior, can be useful for this shaping. Previous studies have reported entrainment occurring within the same modality, but they have given little attention to entrainment across different modalities (e.g., speech and gestures). We need to consider this cross-modal effect because human-robot interaction is inherently multi-modal. In this paper, we defined "mutual entrainment" as entrainment across different modalities and investigated its effect through a laboratory experiment. We evaluated how the frequency of human pointing gestures varies with the amount of information in robot speech and found that the gesture frequency increased as the amount of information decreased. The results suggest that smoother human-robot communication can be achieved by shaping human behavior through mutual entrainment.
  • Masahiro Shiomi, Kayako Nakagawa, Reo Matsumura, Kazuhiko Shinozawa, Hiroshi Ishiguro, Norihiro Hagita
    IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 3899-3904, 2010  Peer-reviewed
    This paper reports the persuasive effect of a robot's whispering behavior, which consists of a whispering gesture and a request made in a small voice. Whispering gestures naturally create close distance and feelings of warmth in subjects, and requests in quiet voices accompanied by whispering gestures also create familiar impressions; both are effective factors in persuasion. We believe that such physical behavior as whispering is one persuasion advantage that real robots hold over computers. We conducted a between-subjects experiment to investigate the effectiveness of these two factors on persuasion. In the experiment, the robot asked subjects to perform a tedious task: writing as many multiplication-table equations as possible. Whispering gestures significantly increased the working time and the number of equations, while the loudness of the voice in the request had no effect. We believe the results indicate the effectiveness of physical behavior for persuasion in human-robot interaction.
  • Kazuhiko Shinozawa, Norihiro Hagita, Michiko Furutani, Rumiko Matsuoka
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5633 44-50, 2009  Peer-reviewed
    Periodic health examinations typically report whether each blood and urine test item falls within its reference range, together with a simple summary of everyday-life advice and possible diseases. However, the examination uses many weakly correlated items, such as AST (GOT) and ALT (GPT), and often includes expensive tumor markers. This paper therefore proposes a data-mining method for finding hidden relationships between these items, in order to reduce examination fees and provide reports tailored to individuals. Since most pairs of items show low correlation coefficients across all clients, we instead examine each client's item values over consecutive health examinations. Clients are divided into four groups according to how often an item falls outside its reference range over three consecutive examinations, and the average values of the other items are calculated for each group over all item pairs. Results for three consecutive health examinations show that many item pairs exhibit positive or negative correlations between the out-of-range frequency of one item and the averages of the other, despite their small correlation coefficients. The results suggest both the possibility of reducing examination fees and the possibility of health-care reports that reflect individuals.
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takahiro Miyashita, Takaaki Akimoto, Norihiro Hagita
    2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, 3727-3734, 2009  Peer-reviewed
    A communication robot must recognize a referred-to object to support us in daily life. However, with our wide human vocabulary, we often refer to objects in terms that are incomprehensible to the robot. This paper focuses on lexical entrainment to solve this problem. Lexical entrainment is the phenomenon of people tending to adopt the terms of their interlocutor. While it has been well studied in human-computer interaction, few published papers have approached it in human-robot interaction. To investigate how lexical entrainment occurs in human-robot interaction, we conducted experiments where people instructed the robot to move objects. Our results show that two types of lexical entrainment occur in human-robot interaction. We also discuss the effects of the state of objects on lexical entrainment. Finally, we developed a test-bed system for recognizing a referred-to object on the basis of knowledge from our experiments.
  • Kayako Nakagawa, Kazuhiko Shinozawa, Hiroshi Ishiguro, Takaaki Akimoto, Norihiro Hagita
    2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, 5003-+, 2009  Peer-reviewed
    In human-robot interaction, robots often fail to lead humans to intended reactions due to their limited ability to express affective nuances. In this paper, we propose a motion modification method that combines affective nuances with arbitrary motions of humanoid robots to induce intended reactions in humans by expressing affective states. The method is applicable to various humanoid robots that differ in degrees of freedom or appearance, and the affective nuances are parametrically expressed in a two-dimensional model comprising valence and arousal. The experimental results showed that the desired affective nuances could be expressed by our method, but they also suggested some limitations. We believe that the method will contribute to interactive systems in which robots can communicate with appropriate expressions in various contexts.
  • 光永法明, 宮下善太, 篠沢一彦, 宮下敬宏, 石黒浩, 萩田紀博
    日本ロボット学会誌, 26(7) 94-102, Oct, 2008  Peer-reviewed
  • Noriaki Mitsunaga, Zenta Miyashita, Kazuhiko Shinozawa, Takahiro Miyashita, Hiroshi Ishiguro, Norihiro Hagita
    2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 22-26, 2008, Acropolis Convention Center, Nice, France, 3336-3343, 2008  Peer-reviewed
  • 伊藤 禎宣, 岩澤 昭一郎, 土川 仁, 篠沢 一彦, 角 康之, 間瀬 健二, 鳥山 朋二, 小暮 潔, 萩田 紀博
    情報処理学会論文誌, 49(1) 83-95, Jan, 2008  Peer-reviewed
  • 吉川 雄一郎, 篠沢 一彦, 石黒 浩, 萩田 紀博, 宮本 孝典
    情報処理学会論文誌, 48(3) 1284-1293, Mar, 2007  Peer-reviewed
  • Kazuhiko Shinozawa, Takahiro Miyashita, Masayuki Kakio, Norihiro Hagita
    HUMANOIDS: 2007 7TH IEEE-RAS INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS, 366-370, 2007  Peer-reviewed
    We investigate how users specify physical objects in human-humanoid interaction and discuss confirmation methods for humanoid robots. In the future, such robots are expected to serve as daily-life communication partners in similar ways as family members or friends. We conducted an experiment in which users asked a robot to retrieve books, to investigate how they specify a book's location. Preliminary results show that half of users' ways of specifying a book can be understood only with speech recognition; however, combining pointing with perfect speech recognition can identify almost all user requests. Since humanoid robots need confirmation in actual situations due to imprecise pointing or noise problems, we also considered robots' confirmation behaviors. Experimental results suggest that confirmation that includes both speaking a book's title and pointing gives users more ways of specifying a book and reduces workload.
  • Hiroko Tochigi, Kazuhiko Shinozawa, Norihiro Hagita
    ICMI'07: PROCEEDINGS OF THE NINTH INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES, 279-284, 2007  Peer-reviewed
    This paper investigates the effect on user impressions of the body direction of a stuffed-doll robot in an animation system. Many systems that combine a computer display with a robot have been developed, and one of their applications is entertainment, for example, an animation system. In these systems the robot, as a 3D agent, can be more effective than a 2D agent in helping the user enjoy the animation experience by using spatial characteristics, such as body direction, as a means of expression. The direction in which the robot faces, i.e., towards the human or towards the display, is investigated here. User impressions from 25 subjects were examined. The experimental results show that the robot facing the display together with the user is effective for eliciting good feelings from the user, regardless of the user's personality characteristics. Results also suggest that extroverted subjects tend to feel better towards a robot facing the user than introverted ones do.
  • Yuichiro Yoshikawa, Kazuhiko Shinozawa, Hiroshi Ishiguro, Norihiro Hagita, Takanori Miyamoto
    2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12, 4564-+, 2006  
    In face-to-face communication, eyes play a central role, for example in directing attention and regulating turn-taking, and gaze has therefore been a central topic in several fields of interaction study. Although many psychology findings have encouraged previous work in both human-computer and human-robot interaction studies, so far there have been few explorations of how to move an agent's eyes, including when to move them, for communication. It is this topic we address in this study. The impression a person forms from an interaction is strongly influenced by the degree to which their partner's gaze direction correlates with their own. In this paper, we propose methods of controlling a robot's gaze responsively to its partner's gaze and confirm their effect on the feeling of being looked at, which is considered the basis of conveying impressions through gaze, in face-to-face interaction experiments. Furthermore, an additional preliminary experiment with an on-screen agent shows the possibility of using blinking behaviour as another modality for responding to a partner.
  • Kazuhiko Shinozawa, Futoshi Naya, Junji Yamato, Kiyoshi Kogure
    International Journal of Human-Computer Studies, 62(2) 267-279, Mar, 2005  Peer-reviewed
  • Kazuhiko Shinozawa, Futoshi Naya, Kiyoshi Kogure, Junji Yamato
    2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, September 28 - October 2, 2004, 1908-1913, 2004  Peer-reviewed
  • 納谷 太, 篠沢 一彦, 大和 淳司, 小暮 潔
    信学論 D-II, 85(4) 613-621, Apr, 2002  Peer-reviewed
  • Junji Yamato, Kazuhiko Shinozawa, Futoshi Naya, Kiyoshi Kogure
    Human-Computer Interaction INTERACT '01: IFIP TC13 International Conference on Human-Computer Interaction (INTERACT), 690-691, 2001  
  • 篠沢 一彦, 下原 勝憲, 曽根原 登, 徳永 幸夫
    信学論 D-II, 82(7) 1190-1198, Jul, 1999  Peer-reviewed
  • 今井 倫太, 篠沢 一彦
    信学論 D-II, 81(11) 2635-2644, Nov, 1998  Peer-reviewed
  • 篠沢 一彦, 藤井 雅晴, 曽根原 登
    信学論D-II, 78(7) 1144-1149, Jul, 1995  Peer-reviewed
  • 篠沢 一彦, 内山 匡, 曽根原 登
    信学論D-II, 76 D-II(7) 1471-1474, Jul, 1993  Peer-reviewed

Misc.

 8
  • 西尾拓真, 宮下敬宏, 篠澤一彦, 萩田紀博, 安藤英由樹
    日本バーチャルリアリティ学会大会論文集(CD-ROM), 28th, 2023  
  • 大野凪, 宮下敬宏, 篠澤一彦, 萩田紀博, 安藤英由樹
    日本バーチャルリアリティ学会大会論文集(CD-ROM), 27th, 2022  
  • Kayako Nakagawa, Masahiro Shiomi, Kazuhiko Shinozawa, Reo Matsumura, Hiroshi Ishiguro, Norihiro Hagita
    PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTIONS (HRI 2011), 465-472, 2011  
    This paper presents the effect of a robot's active touch on improving people's motivation. For services in the education and healthcare fields, a robot might be useful for improving motivation to perform repetitive, monotonous tasks such as exercising or taking medicine. Previous research demonstrated the effect of a user's touch on improving impressions of a robot, but it did not clarify whether a robot's touch, especially an active touch, has enough influence on people's motivation. We implemented an active touch behavior and experimentally investigated its effect on motivation. In the experiment, a robot asked participants to perform a monotonous task with an active touch from the robot, a passive touch, or no touch. The results showed that an active touch by the robot increased the number of working actions and the amount of working time for the task. This suggests that a robot's active touch can help people improve their motivation, and we believe such behavior is useful for robot services in education and healthcare.
  • Shuichi Nishio, Takayuki Kanda, Takahiro Miyashita, Kazuhiko Shinozawa, Norihiro Hagita, Tatsuya Yamazaki
    Journal of the Robotics Society of Japan, 26(5) 427-430, Jul 15, 2008  
  • 飯尾尊優, 篠沢一彦, 塩見昌裕, 宮下敬宏, 秋本高明, 萩田紀博
    日本ロボット学会学術講演会予稿集(CD-ROM), 26th, 2008  

Presentations

 1

Professional Memberships

 2

Research Projects

 3