Research Achievements

篠澤 一彦

シノザワ カズヒコ  (Kazuhiko Shinozawa)

Basic Information

Affiliation
Professor, Division of Math, Sciences and Information Education (理数情報教育系), Osaka Kyoiku University
Degrees
Bachelor of Engineering (Keio University)
Master of Engineering (Keio University)
Doctor of Informatics (Kyoto University)

Researcher Number
80395160
J-GLOBAL ID
202101010709748024
researchmap Member ID
R000029802

Awards

 3

Papers

 53
  • Shohei Yamashita, Tomohiro Kurihara, Tetsushi Ikeda, Kazuhiko Shinozawa, Satoshi Iwaki
    Advanced Robotics 34(20) 1309-1323 October 2020  Peer-reviewed
  • 長谷川孔明, 古谷誠悟, 金井祐輔, 篠沢一彦, 今井倫太
    Journal of Japan Society for Fuzzy Theory and Intelligent Informatics (知能と情報) 30(4) 634-642 August 2018  Peer-reviewed
  • Yoichi Morales, Atsushi Watanabe, Florent Ferreri, Jani Even, Kazuhiko Shinozawa, Norihiro Hagita
    Robotics and Autonomous Systems 108 13-26 May 2018  Peer-reviewed
  • Reo Matsumura, Masahiro Shiomi, Kayako Nakagawa, Kazuhiko Shinozawa, Takahiro Miyashita
    Journal of Robotics and Mechatronics 28(1) 107-108 2016  Peer-reviewed
    We developed robovie-mR2, a desktop-sized communication robot, in which we incorporated a “Kawaii” design to create a familiar appearance because it is an important acceptance factor for both researchers and users. It can interact with people using multiple sensors, including a camera and microphones, expressive gestures, and an information display. We believe that robovie-mR2 will become a useful robot platform to advance the research of human-robot interaction. We also give examples of human-robot interaction research works that use robovie-mR2.
  • Masahiro Shiomi, Kayako Nakagawa, Kazuhiko Shinozawa, Reo Matsumura, Hiroshi Ishiguro, Norihiro Hagita
    International Journal of Social Robotics 9(1) 5-15 2016  Peer-reviewed
    This paper investigated the effects of being touched by a robot on a person's motivation. Human science literature has shown that touching others facilitates the efforts of the people who are touched. In the human-robot interaction research field, on the other hand, past research has not focused on the effects of such touches from robots to people. A few studies reported negative impressions from people, even though a touch from a person to a robot left a positive impression. To reveal whether a robot's touch positively affects humans, we conducted an experiment in which a robot requested participants to perform a simple and monotonous task with/without touch interaction between the robot and the participants. The results showed that both touches from the robot to the participants and touches from the participants to the robot facilitated their efforts.
  • Shota Sasai, Itaru Kitahara, Yoshinari Kameda, Yuichi Ohta, Masayuki Kanbara, Yoichi Morales, Norimichi Ukita, Norihiro Hagita, Tetsushi Ikeda, Kazuhiko Shinozawa
    PROCEEDINGS OF THE 2015 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY WORKSHOPS 40-46 2015  Peer-reviewed
    Many efforts in industrial, political, and academic fields aim to realize a society in which autonomous vehicles become a common means of transportation. To make autonomous vehicles more familiar to the public, it is necessary to develop not only advanced automated driving control but also comfortable environments for passengers. This paper describes our trial to improve the comfort of passengers in autonomous vehicles. We developed an experimental vehicle equipped with a Mixed Reality (MR) display system that aims to reduce anxiety using visual factors. The proposed system visualizes the road surface that is out of the passenger's field of view by projecting a see-through image onto the dashboard. Moreover, it overlays computer graphics of the wheel trajectories on the displayed image using MR so that passengers can easily confirm that the automated driving control is working correctly. The displayed images enable passengers to comprehend the road condition and the expected vehicle route outside their field of view. We investigated changes in mental stress by measuring physiological indices: heart rate variability and sweat information.
  • R. Hashimoto, R. Nomura, Masayuki Kanbara, Norimichi Ukita, Tetsushi Ikeda, Yoichi Morales, Atsushi Watanabe, Kazuhiko Shinozawa, Norihiro Hagita
    IEEE International Conference on Vehicular Electronics and Safety, ICVES 2015, Yokohama, Japan, November 5-7, 2015 158-163 2015  Peer-reviewed
  • Atsushi Watanabe, Tetsushi Ikeda, Yoichi Morales, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015, Hamburg, Germany, September 28 - October 2, 2015 5763-5769 2015  Peer-reviewed
  • Yoichi Morales, Atsushi Watanabe, Florent Ferreri, Jani Even, Tetsushi Ikeda, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26-30 May, 2015 6153-6159 2015  Peer-reviewed
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Katsunori Shimohara, Mitsunori Miki, Norihiro Hagita
    International Journal of Social Robotics 7(2) 253-263 2015  Peer-reviewed
  • Yoichi Morales, Jani Even, Nagasrikanth Kallakuri, Tetsushi Ikeda, Kazuhiko Shinozawa, Tadahisa Kondo, Norihiro Hagita
    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA) 2197-2202 2014  Peer-reviewed
    This work introduces a 3D visibility model for comfortable autonomous vehicles. The model computes a visibility index based on the pose of the wheelchair within the environment. We correlate this index with human navigational comfort (discomfort) and we discuss the importance of modeling visibility to improve human riding comfort. The proposed approach models the 3D visual field of view combined with a two-layered environmental representation. The field of view is modeled with information from the pose of the robot, a 3D laser sensor and a two-layered environmental representation composed of a 3D geometric map with traversable area information. Human navigational discomfort was extracted from participants riding the autonomous wheelchair. Results show that there is a fair correlation between poor visibility locations (e.g., blind corners) and human discomfort. The approach can model places with identical traversable characteristics but different visibility and it differentiates visibility characteristics according to traveling direction.
  • Masato Sakata, Zeynep Yucel, Kazuhiko Shinozawa, Norihiro Hagita, Michita Imai, Michiko Furutani, Rumiko Matsuoka
    ACM Transactions on Management Information Systems 4(3) 13:1-13:21 October 2013  Peer-reviewed
  • Takahiro Miyashita, Kazuhiko Shinozawa
    Journal of the Institute of Electronics, Information and Communication Engineers 96(8) 616-620 August 2013
  • Masahiro Shiomi, Kazuhiko Shinozawa, Yoshifumi Nakagawa, Takahiro Miyashita, Toshio Sakamoto, Toshimitsu Terakado, Hiroshi Ishiguro, Norihiro Hagita
    International Journal of Social Robotics 5(2) 251-262 February 2013  Peer-reviewed
  • Koji Kamei, Tetsushi Ikeda, Masahiro Shiomi, Hiroyuki Kidokoro, Akira Utsumi, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    Annals of Telecommunications (Special Issue on Ubiquitous Networked Robots) 67(7) 329-340 June 2012  Peer-reviewed
  • 城所 宏行, 亀井 剛次, 篠沢 一彦, 宮下 敬宏, 萩田紀博
    IEICE Transactions D 95-D(4) 790-798 April 2012  Peer-reviewed
    This paper proposes a product recommendation method based on the interests of customers shopping inside a retail store in a commercial facility. A customer who stops in front of a shelf can be assumed to be interested in the products on that shelf, and during a single shopping trip a customer stops in several areas in front of shelves (stop regions). To estimate customer interest, we therefore focus on this stopping behavior among the customer's purchasing behaviors. The combination of stop regions can be expected to be common among customers who visit the store for the same purpose. We propose a method that estimates, from the time series of stop regions visited since entering the store (the stop-region sequence), the stop region the customer is most likely to visit next (the recommendation target) and that, based on this estimate, presents product recommendation content through an information-presenting robot and digital signage installed in the store. To examine how the recommendations change customer behavior, we built an experimental store modeled on a convenience store and conducted a recommendation experiment based on the proposed method. In 13 of the 17 cases in which customers paid attention to a recommendation, the customer visited the recommended region, confirming that customers behaved differently from the case without recommendations. (A minimal sketch of this kind of next-region estimation appears after this list.)
  • 飯尾尊優, 塩見昌裕, 篠沢一彦, 下原勝憲, 萩田紀博
    情報通信学会論文誌 53(4) 1251-1268 April 2012  Peer-reviewed
  • Kayako Nakagawa, Masahiro Shiomi, Kazuhiko Shinozawa, Reo Matsumura, Hiroshi Ishiguro, Norihiro Hagita
    International Journal of Social Robotics 5(1) 5-16 March 2012  Peer-reviewed
  • 中川佳弥子, 塩見昌裕, 篠沢一彦, 松村礼央, 石黒浩, 萩田紀博
    IEICE Transactions A 95-A(1) 136-144 January 2012  Peer-reviewed
    In recent years, research and development on various robot-based services has advanced in fields such as education and welfare. In these fields, it is useful for a robot, through interaction with the user, to encourage and raise motivation for daily tasks such as school homework or exercise for health. Past research has shown that touch has a positive effect on the impression of a robot, but how a robot's active touch influences behavior has not been clarified. To examine the effect of a robot's active touch, we conducted an experiment in which a robot asked participants to perform a boring task under three conditions: no touch, passive touch, and active touch. The results showed that when the request was accompanied by an active touch, task performance (amount achieved and duration) improved significantly compared with the other conditions, while no correlation was found between the impression of the robot and task performance. These results suggest that a robot's active touch may raise user motivation. We believe this finding is useful for designing robot behavior in various services that involve touch interaction with robots.
  • 杉山治, 篠沢一彦, 今井倫太, 萩田紀博
    IEICE Transactions A 95-A(1) 136-144 January 2012  Peer-reviewed
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Katsunori Shimohara, Norihiro Hagita
    International Journal of Social Robotics 3(4) 405-414 September 2011  Peer-reviewed
  • Hiroyuki Kidokoro, Koji Kamei, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    UbiComp 2011: Ubiquitous Computing, 13th International Conference, UbiComp 2011, Beijing, China, September 17-21, 2011, Proceedings 569-570 2011  Peer-reviewed
  • Koji Kamei, Tetsushi Ikeda, Hiroyuki Kidokoro, Masahiro Shiomi, Akira Utsumi, Kazuhiko Shinozawa, Takahiro Miyashita, Norihiro Hagita
    Proceedings - 2011 IEEE International Conference on Privacy, Security, Risk and Trust and IEEE International Conference on Social Computing, PASSAT/SocialCom 2011 235-241 2011  Peer-reviewed
    Applying the technologies of a network robot system, we incorporate the recommendation methods used in E-commerce into a retail shop in the real world. We constructed a platform for ubiquitous networked robots that focuses on a shop environment where communication robots perform customer navigation. The platform estimates customer interests from their pre-purchasing behaviors observed by networked sensors without concrete IDs and controls visible-type communication robots in the environment to perform customer navigation. The system can perform collaborative filtering-based recommendations without excessively intruding upon the privacy of customers. Since observations and recommendations in real environments are intrinsically annoying, the robot scenarios are as important as the system for interacting with customers. Two types of navigation scenarios are implemented and investigated in experiments with 80 participants. The results indicate that in the cooperative navigation scenario, the participants who interacted with the communication robots located both outside and inside the shop felt friendlier toward the robots and found it easier to understand what they said.
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Katsunori Shimohara, Norihiro Hagita
    PROCEEDINGS OF THE SIXTEENTH INTERNATIONAL SYMPOSIUM ON ARTIFICIAL LIFE AND ROBOTICS (AROB 16TH '11) 585-588 2011  Peer-reviewed
    This paper reports a new finding: a person's gestures or words are implicitly modified by a robot's gestures or words. Previous research focused on the implicit effect of a robot's gestures on a person's gestures, or of a robot's words on a person's words, but not on the implicit effect of a robot's gestures on a person's words or of a robot's words on a person's gestures. We supposed that such an effect arises between different modalities and defined it as a cross-modal effect. To verify hypotheses about the cross-modal effect, we conducted an experiment focusing on the pair of a pointing gesture and a deictic word. The results showed that participants used pointing gestures more often when the robot used deictic words, and used deictic words more often when the robot used pointing gestures. Therefore, a person's pointing gestures were implicitly modified by the robot's deictic words, and a person's deictic words were implicitly modified by the robot's pointing gestures. The cross-modal effect is expected to be applicable to robot dialogue design to elicit comprehensible behavior from a person.
  • 飯尾尊優, 塩見昌裕, 篠沢一彦, 秋本高明, 下原勝憲, 萩田紀博
    Transactions of Human Interface Society 13(1) 9-21 January 2011  Peer-reviewed
  • 中川佳弥子, 塩見昌裕, 篠沢一彦, 松村礼央, 石黒浩, 萩田紀博
    Transactions of Human Interface Society 13(1) 31-40 January 2011  Peer-reviewed
  • 中川佳弥子, 篠沢一彦, 松村 礼央, 石黒 浩, 萩田紀博
    Transactions of Human Interface Society 12(3) 239-248 August 2010  Peer-reviewed
  • 飯尾尊優, 塩見昌裕, 篠沢一彦, 宮下敬宏, 下原勝憲, 秋本高明, 萩田紀博
    IPSJ Journal 51(2) 277-289 February 2010  Peer-reviewed
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Norihiro Hagita, Katsunori Shimohara
    Proceedings of the SICE Annual Conference 2769-2774 2010
    Social robots providing services in real environments have been developed recently. Such robots should appropriately recognize orders from users through human-like communication. However, users' communication styles are too diverse to achieve this goal. If the robot could shape those styles, its recognition ability would increase. Entrainment has attracted attention as a phenomenon in which human gestures and speech are implicitly synchronized with robot gestures and speech. Previous studies have reported entrainment occurring within the same modality, but we need to clarify the cross-modality effects because human-robot interaction is inherently multi-modal. In this paper, we define "mutual entrainment" as entrainment across different modalities and investigate its effects through a laboratory experiment. We evaluate how the frequency of human pointing gestures varies with the amount of information in robot speech, and as a result, we find that the gesture frequency increases as the amount of information decreases. The results suggest that smoother human-robot communication can be achieved by shaping human behavior through mutual entrainment.
  • Osamu Sugiyama, Kazuhiko Shinozawa, Takaaki Akimoto, Norihiro Hagita
    SOCIAL ROBOTICS, ICSR 2010 6414 90-99 2010  Peer-reviewed
    This paper reports the docking and metaphor effects on persuasion in multi-robot healthcare systems. The goal of our research is to make a robot friend that lives with its users and persuades them to make appropriate healthcare decisions. To realize such a robot friend, we propose a physical approach called docking as well as a contextual approach called metaphor to perform relational inheritance among multi-robot systems. We implemented a multi-robot persuasion system based on the two approaches and verified its effectiveness. The experimental results revealed that users emphasize interpersonal relationships when deciding whether to follow the robot's advice when the metaphor approach is used, and that users emphasize robot aggressiveness when the docking approach is used.
  • Koji Kamei, Kazuhiko Shinozawa, Tetsushi Ikeda, Akira Utsumi, Takahiro Miyashita, Norihiro Hagita
    International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010 19:1-19:8 2010  Peer-reviewed
    By applying network robot technologies, recommendation methods from E-commerce are incorporated into a retail shop in the real world. We constructed an experimental shop environment where communication robots recommend specific items to customers according to their purchasing behavior as observed by networked sensors. A recommendation scenario is implemented with three robots and investigated through an experiment. The results indicate that the participants stayed longer in front of the shelves when the communication robots tried to interact with them and were influenced to carry out purchasing behaviors similar to those observed earlier. Other results suggest that the probability of customers' zone transitions can be used to anticipate their purchasing behavior.
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Katsunori Shimohara, Norihiro Hagita
    SOCIAL ROBOTICS, ICSR 2010 6414 372-381 2010  Peer-reviewed
    Social robots need to recognize the objects indicated by people to work in real environments. This paper presents the entrainment of human pointing gestures during interaction with a robot and investigates which robot gestures are important for such entrainment. We conducted a Wizard-of-Oz experiment in which a person and a robot referred to objects and evaluated the entrainment frequency. The frequency was lowest when the robot used only pointing gestures and highest when it used both gazing and pointing gestures. These results suggest that not only robot pointing gestures but also gazing gestures affect entrainment. We conclude that the entrainment of pointing gestures might improve a robot's ability to recognize them.
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takaaki Akimoto, Katsunori Shimohara, Norihiro Hagita
    IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010) 5294-5301 2010  Peer-reviewed
    Social robots that provide services to humans in real environments have been developed in recent years. For user-friendliness, such a robot should appropriately recognize its users' orders through human-like communication. However, users' styles of communicating are too diverse to achieve this goal. If the robot could shape those styles, its recognition ability would be improved. Entrainment, a phenomenon in which a human's behavior is synchronized with a robot's behavior, can be useful for this shaping. Previous studies have reported entrainment occurring in the same modality, but they have given little attention to entrainment across different modalities (e.g., speech and gestures). We need to consider this cross-modal effect because human-robot interaction is inherently multi-modal. In this paper, we defined "mutual entrainment" as entrainment across different modalities and investigated its effect through a laboratory experiment. We evaluated how the frequency of human pointing gestures varies with the amount of information in robot speech and found that the gesture frequency increased as the amount of information decreased. The results suggest that smoother human-robot communication can be achieved by shaping human behavior through mutual entrainment.
  • Masahiro Shiomi, Kayako Nakagawa, Reo Matsumura, Kazuhiko Shinozawa, Hiroshi Ishiguro, Norihiro Hagita
    Proceedings of the International Conference on Intelligent Robots and Systems (IROS 2010) 3899-3904 2010  Peer-reviewed
  • Kazuhiko Shinozawa, Norihiro Hagita, Michiko Furutani, Rumiko Matsuoka
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 5633 44-50 2009  Peer-reviewed
    Periodic health examinations typically report whether each blood and urine examination item falls within its reference range, together with a simple summary of points to check in everyday life and possible suspected diseases. However, such examinations use many items, such as AST(GOT) and ALT(GPT), that are weakly correlated, and they often include expensive tumor markers. This paper therefore proposes a data mining method for finding hidden relationships between these items in order to reduce the examination fee and to produce a report tailored to the individual. Since low correlation coefficients are observed for most pairs of items across all clients, the set of item values from each client's consecutive health examinations is investigated instead. Four groups are formed according to how often an item falls outside its reference range over three consecutive examinations, and the average values of the other items in each group are calculated for all pairs of items. The results for three consecutive health examinations show that many item pairs exhibit positive or negative correlations between the out-of-range frequency of one item and the average of the other, despite their small correlation coefficients. This shows both the possibility of reducing the examination fee and the possibility of a health-care report that reflects the individual.
  • Takamasa Iio, Masahiro Shiomi, Kazuhiko Shinozawa, Takahiro Miyashita, Takaaki Akimoto, Norihiro Hagita
    2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS 3727-3734 2009  Peer-reviewed
    A communication robot must recognize a referred-to object to support us in daily life. However, using our wide human vocabulary, we often refer to objects in terms that are incomprehensible to the robot. This paper focuses on lexical entrainment to solve this problem. Lexical entrainment is the phenomenon of people tending to adopt the terms of their interlocutor. While this has been well studied in human-computer interaction, few published papers have approached it in human-robot interaction. To investigate how lexical entrainment occurs in human-robot interaction, we conduct experiments where people instruct the robot to move objects. Our results show that two types of lexical entrainment occur in human-robot interaction. We also discuss the effects of the state of objects on lexical entrainment. Finally, we developed a test bed system for recognizing a referred-to object on the basis of knowledge from our experiments.
  • Kayako Nakagawa, Kazuhiko Shinozawa, Hiroshi Ishiguro, Takaaki Akimoto, Norihiro Hagita
    2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS 5003-+ 2009  Peer-reviewed
    In human-robot interaction, robots often fail to lead humans to intended reactions due to their limited ability to express affective nuances. In this paper, we propose a motion modification method that combines affective nuances with arbitrary motions of humanoid robots to induce intended human reactions by expressing affective states. The method is applicable to various humanoid robots that differ in degrees of freedom or appearance, and the affective nuances are parametrically expressed in a two-dimensional model comprising valence and arousal. The experimental results showed that the desired affective nuances could be expressed by our method, but they also suggested some limitations. We believe that the method will contribute to interactive systems in which robots can communicate with appropriate expressions in various contexts.
  • 光永法明, 宮下善太, 篠沢一彦, 宮下敬宏, 石黒浩, 萩田紀博
    Journal of the Robotics Society of Japan 26(7) 94-102 October 2008  Peer-reviewed
  • Noriaki Mitsunaga, Zenta Miyashita, Kazuhiko Shinozawa, Takahiro Miyashita, Hiroshi Ishiguro, Norihiro Hagita
    2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 22-26, 2008, Acropolis Convention Center, Nice, France 3336-3343 2008  Peer-reviewed
  • 伊藤 禎宣, 岩澤 昭一郎, 土川 仁, 篠沢 一彦, 角 康之, 間瀬 健二, 鳥山 朋二, 小暮 潔, 萩田 紀博
    IPSJ Journal 49(1) 83-95 January 2008  Peer-reviewed
  • 吉川 雄一郎, 篠沢 一彦, 石黒 浩, 萩田 紀博, 宮本 孝典
    IPSJ Journal 48(3) 1284-1293 March 2007  Peer-reviewed
  • Kazuhiko Shinozawa, Takahiro Miyashita, Masayuki Kakio, Norihiro Hagita
    HUMANOIDS: 2007 7TH IEEE-RAS INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS 366-370 2007  Peer-reviewed
    We investigate how users specify physical objects in human-humanoid interaction and discuss confirmation methods for humanoid robots. In the future, such robots are expected to serve as daily-life communication partners in much the same way as family members or friends. We conducted an experiment in which users asked a robot to retrieve books in order to investigate their methods of specifying location. Preliminary results show that half of users' ways of specifying a book can be understood only with speech recognition; however, combining pointing and perfect speech recognition can identify almost all user demands. Since humanoid robots need confirmation in actual situations due to imprecise pointing or noise problems, we also considered the confirmation behaviors of robots. Experimental results suggest that confirmation that includes both speaking a book's title and pointing gives the user more ways to specify books and reduces workload.
  • Hiroko Tochigi, Kazuhiko Shinozawa, Norihiro Hagita
    ICMI'07: PROCEEDINGS OF THE NINTH INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES 279-284 2007  Peer-reviewed
    This paper investigates the effect of the body direction of a stuffed-doll robot in an animation system on user impressions. Many systems that combine a computer display with a robot have been developed, and one of their applications is entertainment, for example, an animation system. In these systems, the robot, as a 3D agent, can be more effective than a 2D agent in helping the user enjoy the animation experience by using spatial characteristics, such as body direction, as a means of expression. The direction in which the robot faces, i.e., towards the human or towards the display, is investigated here. User impressions from 25 subjects were examined. The experimental results show that the robot facing the display together with the user is effective for eliciting good feelings from the user, regardless of the user's personality characteristics. The results also suggest that extroverted subjects tend to have better feelings towards a robot facing the user than introverted subjects do.
  • Yuichiro Yoshikawa, Kazuhiko Shinozawa, Hiroshi Ishiguro, Norihiro Hagita, Takanori Miyamoto
    2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12 4564-+ 2006
    In face-to-face communication, eyes play a central role, for example in directing attention and regulating turn-taking. For this reason, it has been a central topic in several fields of interaction study. Although many psychology findings have encouraged previous work in both human-computer and human-robot interaction studies, so far there have been few explorations on how to move the agent's eye, including when to move it, for communication. Therefore, it is this topic we address in this study. The impression a person forms from an interaction is strongly influenced by the degree to which their partner's gaze direction correlates with their own. In this paper, we propose methods of controlling a robot's gaze responsively to its partner's gaze and confirm the effect of this on the feeling of being looked at, which is considered to be the basis of conveying impressions using gaze in face-to-face interaction experiments. Furthermore, an additional preliminary experiment with an on screen agent shows the possibility of using blinking behaviour as another modality for responding to a partner.
  • Kazuhiko Shinozawa, Futoshi Naya, Junji Yamato, Kiyoshi Kogure
    International Journal of Human-Computer Studies 62(2) 267-279 March 2005  Peer-reviewed
  • Kazuhiko Shinozawa, Futoshi Naya, Kiyoshi Kogure, Junji Yamato
    2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, September 28 - October 2, 2004 1908-1913 2004  Peer-reviewed
  • 納谷 太, 篠沢 一彦, 大和 淳司, 小暮 潔
    IEICE Transactions D-II 85(4) 613-621 April 2002  Peer-reviewed
    To realize rich tactile interaction between humans and robots, this paper proposes a method for recognizing five types of human touch behavior ("hitting", "patting", "scratching", "stroking", and "tickling") in real time. Using a pressure-distribution sensor with high temporal and spatial resolution, we define a four-dimensional feature vector suited to real-time processing, consisting of the total pressure on the sensor, the peak value of the pressed area, and the temporal changes of both, and classify it with the k-nearest-neighbor method. Recognition experiments yielded an average recognition rate of 67.2% over the five classes. On the other hand, because the feature distributions of the five classes overlap considerably across individuals, improving the recognition rate for a specific individual proved difficult. To solve this problem, we propose a method that adapts the recognition to an individual by using "hitting", which is recognized reliably regardless of the individual, as a negative reinforcement signal indicating that the touch behavior recognized immediately before was incorrect. Simulation experiments show that the recognition rate improves to 96.8%. (A minimal k-nearest-neighbor sketch in the same spirit appears after this list.)
  • Junji Yamato, Kazuhiko Shinozawa, Futoshi Naya, Kiyoshi Kogure
    Human-Computer Interaction INTERACT '01: IFIP TC13 International Conference on Human-Computer Interaction (INTERACT) 690-691 2001
  • 篠沢 一彦, 下原 勝憲, 曽根原 登, 徳永 幸夫
    IEICE Transactions D-II 82(7) 1190-1198 July 1999  Peer-reviewed
  • 今井 倫太, 篠沢 一彦
    IEICE Transactions D-II 81(11) 2635-2644 November 1998  Peer-reviewed
    This paper describes Spondia, a dialogue system for autonomous mobile robots. In dialogue with a robot, users generally give commands for executing tasks. A dialogue system aimed at natural interaction with humans, however, should be able to respond not only to commands but also to utterances that merely address the robot. Such addressing utterances often contain expressions unrelated to any action, so they cannot be mapped directly into action commands. Spondia therefore introduces a spontaneous attention mechanism that links addressing utterances to actions. The mechanism is built as a nonlinear network so that action generation from utterances does not depend only on constraints prepared in advance. As a result, the generated attention is drawn toward the direction constrained by the utterance while not remaining fixed in a single direction. By connecting utterances and actions through the spontaneous attention mechanism, Spondia diversifies the relationship between addressing utterances and actions and enables a style of human-robot dialogue that differs from giving commands. This paper also examines the behavior of the spontaneous attention mechanism in several situations and evaluates its characteristics.
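
The in-store recommendation work above (the IEICE Transactions D 2012 article and the ICMI-MLMI 2010 paper) estimates where a customer is likely to stop next from observed stop-region behavior and zone-transition probabilities. The following Python sketch illustrates that idea under a simplifying first-order (Markov) assumption; the function names, region labels, and data are hypothetical and not taken from the papers, which match the full stop-region sequence rather than only the current region.

from collections import defaultdict

def train_transition_counts(sequences):
    # Count region-to-region transitions observed in past shopping trips.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return counts

def most_likely_next_region(counts, current_region):
    # Return the region with the highest estimated transition probability.
    followers = counts.get(current_region)
    if not followers:
        return None  # no data observed for this region yet
    total = sum(followers.values())
    return max(followers, key=lambda r: followers[r] / total)

# Example with made-up stop-region sequences.
past_trips = [
    ["drinks", "snacks", "register"],
    ["drinks", "bento", "register"],
    ["snacks", "drinks", "snacks", "register"],
]
counts = train_transition_counts(past_trips)
print(most_likely_next_region(counts, "drinks"))  # -> "snacks"

In an actual deployment, the estimated next region would be handed to the presentation side (robot or digital signage) as the recommendation target; here it is simply printed.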
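
The 2002 IEICE Transactions D-II article above classifies five touch types with the k-nearest-neighbor method over four pressure features (total pressure, peak pressed area, and their temporal changes). The Python sketch below shows a plain k-nearest-neighbor classifier of that form; the feature values, English class labels, and choice of k are illustrative assumptions, not data or parameters from the paper.

import math
from collections import Counter

def knn_classify(train, query, k=3):
    # train: list of (feature_vector, label); query: feature_vector.
    # Sort training samples by Euclidean distance to the query.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Majority vote among the k nearest neighbors.
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Toy samples: (total_pressure, peak_area, d_pressure, d_area) -> label
train = [
    ((8.0, 4.0, 6.0, 3.0), "hit"),
    ((7.5, 3.5, 5.5, 2.5), "hit"),
    ((2.0, 2.0, 1.5, 1.0), "pat"),
    ((1.0, 0.5, 0.8, 0.4), "scratch"),
    ((0.8, 3.0, 0.2, 0.3), "stroke"),
    ((0.9, 1.0, 0.6, 0.8), "tickle"),
]
print(knn_classify(train, (7.0, 3.8, 5.0, 2.8), k=3))  # -> "hit"

The per-user adaptation described in the abstract (using a reliably recognized "hit" as a negative reinforcement signal) would sit on top of such a classifier and is not sketched here.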

MISC

 8
  • 西尾拓真, 宮下敬宏, 篠澤一彦, 萩田紀博, 安藤英由樹
    Proceedings of the Annual Conference of the Virtual Reality Society of Japan (CD-ROM) 28th 2023
  • 大野凪, 宮下敬宏, 篠澤一彦, 萩田紀博, 安藤英由樹
    Proceedings of the Annual Conference of the Virtual Reality Society of Japan (CD-ROM) 27th 2022
  • Kayako Nakagawa, Masahiro Shiomi, Kazuhiko Shinozawa, Reo Matsumura, Hiroshi Ishiguro, Norihiro Hagita
    PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2011) 465-472 2011
    This paper presents the effect of a robot's active touch on improving people's motivation. For services in the education and healthcare fields, a robot might be useful for improving the motivation to perform such repetitive and monotonous tasks as exercising or taking medicine. Previous research demonstrated the effect of a user's touch on improving the impression of a robot, but it did not clarify whether a robot's touch, especially an active touch, has enough influence on people's motivation. We implemented an active touch behavior and experimentally investigated its effect on motivation. In the experiment, a robot requested participants to perform a monotonous task with an active touch, a passive touch, or no touch. The results showed that an active touch by the robot increased the number of working actions and the amount of working time for the task. This suggests that a robot's active touch can help people improve their motivation. We believe that a robot's active touch behavior is useful for robot services such as education and healthcare.
  • 西尾 修一, 神田 崇行, 宮下 敬宏, 篠沢 一彦, 萩田 紀博, 山崎 達也
    Journal of the Robotics Society of Japan 26(5) 427-430 July 15, 2008
  • 飯尾尊優, 篠沢一彦, 塩見昌裕, 宮下敬宏, 秋本高明, 萩田紀博
    Proceedings of the Annual Conference of the Robotics Society of Japan (CD-ROM) 26th 2008

Presentations

 1

Research Projects (Joint Research, Competitive Funding, etc.)

 3