Osaka Kyoiku University Researcher Information
Curriculum Vitae
Profile Information
- Affiliation
- Professor, Division of Math, Sciences, and Information Technology in Education, Osaka Kyoiku University
- Degree
- Bachelor of Engineering (Keio University), Master of Engineering (Keio University), Ph.D. in Informatics (Kyoto University)
- Researcher number
- 80395160
- J-GLOBAL ID
- 202101010709748024
- researchmap Member ID
- R000029802
Committee Memberships (1)
-
2013
Awards (3)
Papers (53)
-
Advanced Robotics, 34(20) 1309-1323, Oct, 2020 Peer-reviewed
-
Robotics and Autonomous Systems, 108 13-26, May, 2018 Peer-reviewed
-
Journal of Robotics and Mechatronics, 28(1) 107-108, 2016 Peer-reviewed
We developed robovie-mR2, a desktop-sized communication robot that incorporates a "Kawaii" design to create a familiar appearance, an important acceptance factor for both researchers and users. It can interact with people using multiple sensors, including a camera and microphones, expressive gestures, and an information display. We believe that robovie-mR2 will become a useful robot platform for advancing human-robot interaction research. We also give examples of human-robot interaction research that uses robovie-mR2.
-
International Journal of Social Robotics, 9(1) 5-15, 2016 Peer-reviewed
This paper investigates the effects of being touched by a robot on a person's motivation. The human science literature has shown that touching others facilitates the efforts of those touched. In the human-robot interaction research field, however, past research has not focused on the effects of such touches from robots to people; a few studies even reported negative impressions, whereas a touch from a person to a robot left a positive impression. To reveal whether a robot's touch positively affects humans, we conducted an experiment in which a robot requested participants to perform a simple and monotonous task, with or without touch interaction between the robot and the participants. The results showed that both touches from the robot to the participants and touches from the participants to the robot facilitated their efforts.
-
PROCEEDINGS OF THE 2015 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY WORKSHOPS, 40-46, 2015 Peer-reviewed
Many efforts in industrial, political, and academic fields aim to realize a society where autonomous vehicles become a common means of transportation. To make autonomous vehicles more acceptable to the public, it is necessary to develop not only advanced automated driving control but also comfortable environments for passengers. This paper describes our trial to improve passenger comfort in autonomous vehicles. We developed an experimental vehicle equipped with a Mixed Reality (MR) display system that aims to reduce anxiety using visual factors. The proposed system visualizes the road surface outside the passenger's field of view by projecting a see-through image onto the dashboard. Moreover, it overlays computer graphics of the wheel trajectories on the displayed image using MR so that passengers can easily confirm that the automated driving control is working correctly. The displayed images enable passengers to comprehend the road condition and the expected vehicle route beyond their field of view. We investigated changes in mental stress by measuring physiological indices: heart rate variability and sweat information.
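For readers unfamiliar with the stress measure mentioned above, heart rate variability is commonly summarized with time-domain indices such as RMSSD computed from successive R-R intervals. The following is a generic, minimal sketch of that computation; the function name and toy data are illustrative assumptions, not the paper's analysis pipeline.

```python
# Generic sketch: RMSSD, a common time-domain heart rate variability index,
# computed from R-R intervals in milliseconds. Lower HRV is often associated
# with higher mental stress. Not the paper's specific analysis.
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

print(rmssd([812, 790, 805, 830, 799, 815]))  # toy data
```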
-
IEEE International Conference on Vehicular Electronics and Safety, ICVES 2015, Yokohama, Japan, November 5-7, 2015, 158-163, 2015 Peer-reviewed
-
2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015, Hamburg, Germany, September 28 - October 2, 2015, 5763-5769, 2015 Peer-reviewed
-
IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26-30 May, 2015, 6153-6159, 2015 Peer-reviewed
-
International Journal of Social Robotics, 7(2) 253-263, 2015 Peer-reviewed
-
2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2197-2202, 2014 Peer-reviewed
This work introduces a 3D visibility model for comfortable autonomous vehicles. The model computes a visibility index based on the pose of the wheelchair within the environment. We correlate this index with human navigational comfort (discomfort) and discuss the importance of modeling visibility to improve riding comfort. The proposed approach models the 3D visual field of view combined with a two-layered environmental representation. The field of view is modeled from the pose of the robot, a 3D laser sensor, and a two-layered environmental representation composed of a 3D geometric map with traversable-area information. Human navigational discomfort was extracted from participants riding the autonomous wheelchair. Results show a fair correlation between poor-visibility locations (e.g., blind corners) and human discomfort. The approach can model places with identical traversable characteristics but different visibility, and it differentiates visibility characteristics according to traveling direction.
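To illustrate the general idea of a pose-dependent visibility index (the fraction of nearby traversable space that lies inside the sensor's field of view and is not occluded), here is a minimal 2-D grid sketch. The grid representation, ray casting, and parameter values are assumptions for illustration, not the paper's 3D model.

```python
# Hypothetical 2-D sketch of a pose-dependent visibility index:
# the fraction of sampled points within range and field of view that are
# not occluded by obstacles. Illustrative only.
import numpy as np

def visibility_index(grid, pose, fov=np.radians(120), max_range=10.0, n_rays=90):
    """grid: 2-D array, 0 = free/traversable, 1 = obstacle.
    pose: (x, y, heading). Returns visible samples / total samples in the FOV wedge."""
    x0, y0, heading = pose
    visible, total = 0, 0
    for ang in np.linspace(heading - fov / 2, heading + fov / 2, n_rays):
        for r in np.arange(0.5, max_range, 0.5):
            cx = int(round(x0 + r * np.cos(ang)))
            cy = int(round(y0 + r * np.sin(ang)))
            if not (0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]):
                break
            total += 1
            if grid[cx, cy] == 1:   # obstacle: farther cells on this ray are occluded
                break
            visible += 1
    return visible / total if total else 0.0

# Toy usage: a pose near a wall yields a lower index than one in open space.
grid = np.zeros((30, 30))
grid[10:20, 15] = 1                                    # a wall
print(visibility_index(grid, pose=(5.0, 5.0, np.radians(45))))
print(visibility_index(grid, pose=(14.0, 13.0, np.radians(45))))
```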
-
ACM Transactions on Management Information Systems, 4(3) 13:1-13:21, Oct, 2013 Peer-reviewed
Common periodic health check-ups include several clinical test items at affordable cost. However, these standard tests do not directly indicate signs of most lifestyle diseases. To detect such diseases, a number of additional specific clinical tests are required, which increases the cost of the health check-up. This study aims to enrich our understanding of common health check-ups and proposes a way to estimate the signs of several lifestyle diseases based on the standard tests in common examinations, without performing any additional specific tests. In this manner, we enable a diagnostic process in which the physician may prefer to perform or avoid a costly test according to the estimation carried out through a set of common affordable tests. To that end, the relation between standard and specific test results is modeled with a multivariate kernel density estimate, and the condition of the patient regarding a specific test is assessed in a Bayesian framework. Our results indicate that the proposed method achieves an overall estimation accuracy of 84%. In addition, an outstanding estimation accuracy is achieved for a subset of high-cost tests. Moreover, comparison with standard artificial intelligence methods suggests that our algorithm outperforms the conventional methods. Our contributions are as follows: (i) promotion of affordable health check-ups, (ii) high estimation accuracy in certain tests, (iii) generalization capability due to ease of implementation on different platforms and institutions, and (iv) flexibility to apply to various tests and potential to improve early detection rates.
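The estimation scheme described above (class-conditional multivariate kernel density estimates combined with a Bayesian decision) can be illustrated with a minimal sketch. The two-class setup, the synthetic features, and the use of scipy's gaussian_kde are assumptions for illustration, not the paper's actual implementation or data.

```python
# Hypothetical sketch: estimate a costly test's outcome (0 = normal, 1 = abnormal)
# from standard check-up items using class-conditional kernel density estimates
# and Bayes' rule. Not the authors' implementation.
import numpy as np
from scipy.stats import gaussian_kde

def fit_class_densities(X, y):
    """Fit one multivariate KDE and prior per class label."""
    densities, priors = {}, {}
    for label in np.unique(y):
        X_c = X[y == label]
        densities[label] = gaussian_kde(X_c.T)   # gaussian_kde expects (dims, samples)
        priors[label] = len(X_c) / len(X)
    return densities, priors

def posterior_abnormal(x, densities, priors):
    """P(abnormal | standard test values x) via Bayes' rule."""
    joint = {c: densities[c](x)[0] * priors[c] for c in densities}
    evidence = sum(joint.values())
    return joint[1] / evidence if evidence > 0 else priors[1]

# Toy usage with synthetic data: 4 standard items per patient.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 1.0).astype(int)
dens, pri = fit_class_densities(X, y)
print(posterior_abnormal(np.array([1.5, 0.0, 1.0, 0.2]), dens, pri))
```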
-
Journal of the Institute of Electronics, Information and Communication Engineers, 96(8) 616-620, Aug, 2013
-
International Journal of Social Robotics, 5(2) 251-262, Feb, 2013 Peer-reviewed
-
Annals of Telecommunications (Special Issue on Ubiquitous Networked Robots), 67(7) 329-340, Jun, 2012 Peer-reviewed
-
International Journal of Social Robotics, 5(1) 5-16, Mar, 2012 Peer-reviewed
-
International Journal of Social Robotics, 3(4) 405-414, Sep, 2011 Peer-reviewed
-
UbiComp 2011: Ubiquitous Computing, 13th International Conference, UbiComp 2011, Beijing, China, September 17-21, 2011, Proceedings, 569-570, 2011 Peer-reviewed
-
Proceedings - 2011 IEEE International Conference on Privacy, Security, Risk and Trust and IEEE International Conference on Social Computing, PASSAT/SocialCom 2011, 235-241, 2011 Peer-reviewed
Applying the technologies of a network robot system, we incorporate the recommendation methods used in E-commerce into a retail shop in the real world. We constructed a platform for ubiquitous networked robots that focuses on a shop environment where communication robots perform customer navigation. The platform estimates customer interests from their pre-purchasing behaviors observed by networked sensors without concrete IDs and controls visible-type communication robots in the environment to perform customer navigation. The system can perform collaborative filtering-based recommendations without excessively intruding on customers' privacy. Because observations and recommendations in real environments can be intrusive, the robot interaction scenarios are as important as the system itself. Two types of navigation scenarios were implemented and investigated in experiments with 80 participants. The results indicate that in the cooperative navigation scenario, participants who interacted with the communication robots located both outside and inside the shop felt friendlier toward the robots and found it easier to understand what they said.
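As background on the collaborative filtering mentioned above, here is a minimal item-based sketch over a customer-item interaction matrix. The matrix layout (e.g., time spent in front of each shelf), function names, and similarity choice are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical item-based collaborative filtering sketch: recommend shop items
# from anonymous pre-purchasing behavior counts. Illustrative only.
import numpy as np

def recommend(interactions, customer_idx, top_k=3):
    """interactions: (customers x items) matrix of observed behavior counts.
    Returns indices of the top_k recommended (unseen) items."""
    # Cosine similarity between item columns.
    norms = np.linalg.norm(interactions, axis=0, keepdims=True) + 1e-9
    normalized = interactions / norms
    item_sim = normalized.T @ normalized
    # Score items by similarity to the customer's observed items.
    profile = interactions[customer_idx]
    scores = item_sim @ profile
    scores[profile > 0] = -np.inf          # do not re-recommend items already seen
    return np.argsort(scores)[::-1][:top_k]

# Toy usage: 3 customers, 4 shelves.
behavior = np.array([[3, 0, 1, 0],
                     [2, 1, 0, 0],
                     [0, 2, 0, 3]], dtype=float)
print(recommend(behavior, customer_idx=0))
```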
-
PROCEEDINGS OF THE SIXTEENTH INTERNATIONAL SYMPOSIUM ON ARTIFICIAL LIFE AND ROBOTICS (AROB 16TH '11), 585-588, 2011 Peer-reviewed
This paper reports a new finding: a person's gestures or words are implicitly modified by a robot's gestures or words. Previous research focused on the implicit effect of a robot's gestures on a person's gestures, or of a robot's words on a person's words, but not on the effect of a robot's gestures on a person's words or of a robot's words on a person's gestures. We supposed that such an effect arises between different modalities and defined it as a cross-modal effect. To verify hypotheses about the cross-modal effect, we conducted an experiment focusing on a pair consisting of a pointing gesture and a deictic word. The results showed that participants used a pointing gesture more often when the robot used a deictic word, and used a deictic word more often when the robot used a pointing gesture. Therefore, a person's pointing gestures are implicitly modified by a robot's deictic words, and a person's deictic words are implicitly modified by a robot's pointing gestures. The cross-modal effect is expected to be applied to robot dialog design to elicit comprehensible behavior from a person.
-
Human Interface Society (ヒューマンインタフェース学会), 12(3) 239-248, Aug, 2010 Peer-reviewed
-
Proceedings of the SICE Annual Conference, 2769-2774, 2010
Social robots that provide services in real environments have been developed recently. Such robots should appropriately recognize users' orders through human-like communication. However, users' communication styles are too diverse to achieve this goal; if the robot could shape those styles, its recognition ability would increase. Entrainment has attracted attention as a phenomenon in which human gestures and speech are implicitly synchronized with robot gestures and speech. Previous studies have reported entrainment occurring within the same modality, but the cross-modal effects need to be clarified because human-robot interaction is inherently multi-modal. In this paper, we define "mutual entrainment" as entrainment across different modalities and investigate its effects through a laboratory experiment. We evaluate how the frequency of human pointing gestures varies with the amount of information in robot speech and find that the gesture frequency increases as the amount of information decreases. The results suggest that smoother human-robot communication can be achieved by shaping human behavior through mutual entrainment.
-
SOCIAL ROBOTICS, ICSR 2010, 6414 90-99, 2010 Peer-reviewed
This paper reports the effects of docking and metaphor on persuasion in multi-robot healthcare systems. The goal of our research is to make a robot friend that lives with its users and persuades them to make appropriate healthcare decisions. To realize such a robot friend, we propose a physical approach called docking and a contextual approach called metaphor to perform relational inheritance among multi-robot systems. We implemented a multi-robot persuasion system based on the two approaches and verified its effectiveness. The experimental results revealed that users emphasize interpersonal relationships when deciding whether to follow the robot's advice under the metaphor approach, and emphasize robot aggressiveness under the docking approach.
-
International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010, 19:1-19:8, 2010 Peer-reviewed
By applying network robot technologies, recommendation methods from E-commerce are incorporated into a retail shop in the real world. We constructed an experimental shop environment where communication robots recommend specific items to customers according to their purchasing behavior as observed by networked sensors. A recommendation scenario was implemented with three robots and investigated through an experiment. The results indicate that participants stayed longer in front of the shelves when the communication robots tried to interact with them and were influenced to carry out purchasing behaviors similar to those observed earlier. Other results suggest that the probability of customers' zone transitions can be used to anticipate their purchasing behavior.
-
SOCIAL ROBOTICS, ICSR 2010, 6414 372-381, 2010 Peer-reviewed
Social robots need to recognize the objects indicated by people in order to work in real environments. This paper presents the entrainment of human pointing gestures during interaction with a robot and investigates which robot gestures are important for such entrainment. We conducted a Wizard-of-Oz experiment in which a person and a robot referred to objects, and we evaluated the entrainment frequency. The frequency was lowest when the robot used only pointing gestures and highest when it used both gazing and pointing gestures. These results suggest that not only a robot's pointing gestures but also its gazing gestures affect entrainment. We conclude that the entrainment of pointing gestures might improve a robot's ability to recognize them.
-
IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 5294-5301, 2010 Peer-reviewed
Social robots that provide services to humans in real environments have been developed in recent years. Such a robot should appropriately recognize its users' orders through human-like communication in order to be user-friendly. However, users' styles of communicating are too diverse to achieve this goal; if the robot could shape those styles, its recognition ability would be improved. Entrainment, a phenomenon in which a human's behavior is synchronized with a robot's behavior, can be useful for this shaping. Previous studies have reported entrainment occurring in the same modality, but they have given little attention to entrainment across different modalities (e.g., speech and gestures). This cross-modal effect needs to be considered because human-robot interaction is inherently multi-modal. In this paper, we define "mutual entrainment" as entrainment across different modalities and investigate its effect through a laboratory experiment. We evaluate how the frequency of human pointing gestures varies with the amount of information in robot speech and find that the gesture frequency increases as the amount of information decreases. The results suggest that smoother human-robot communication can be achieved by shaping human behavior through mutual entrainment.
-
IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 3899-3904, 2010 Peer-reviewed
This paper reports the persuasion effect of a robot's whispering behavior, which consists of a whispering gesture and a request made in a small voice. Whispering gestures naturally create close distance and feelings of warmth in subjects, and requests in a quiet voice combined with whispering gestures also create familiar impressions; both are effective factors for persuasion. We believe that such physical behavior as whispering is one persuasion advantage that real robots hold over computers. We conducted a between-subjects experiment to investigate the effectiveness of these two factors on persuasion. In the experiment, the robot requested the subjects to perform a tedious task: writing as many multiplication-table equations as possible. As a result, whispering gestures significantly increased the working time and the number of equations, whereas the loudness of the voice in the request had no effect. We believe the results indicate the effectiveness of physical behavior for persuasion in human-robot interaction.
-
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5633 44-50, 2009 Peer-reviewed
A periodic health examination typically reports whether each blood and urine test item falls within its reference range, together with a simple summary of points to check in everyday life and possibly suspected diseases. However, it involves many items, such as AST (GOT) and ALT (GPT), that are only weakly correlated with each other, and it often includes expensive tumor markers. This paper therefore proposes a data mining method for finding hidden relationships between these items in order to reduce the examination fee and to provide reports tailored to individuals. Since most item pairs show low correlation coefficients across all clients, the values of each client's items over consecutive health examinations are investigated instead. Four groups are formed according to how often an item falls outside its reference range over three consecutive examinations, and the average values of the other item in each group are calculated for all item pairs. The experimental results for three consecutive health examinations show that many item pairs exhibit positive or negative relationships between the out-of-range frequency of one item and the averages of the other, even though their correlation coefficients are small. The results suggest both the possibility of reducing the examination fee and the possibility of health-care reports that reflect individual clients.
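The grouping-and-averaging step described above can be sketched with pandas. The column names, reference range, and toy data below are illustrative assumptions, not the paper's dataset or exact procedure.

```python
# Hypothetical sketch of the grouping step: for an item pair (A, B), group
# clients by how often item A fell outside its reference range over
# consecutive examinations, then average item B within each group.
import pandas as pd

def out_of_range_frequency(values, low, high):
    """Count how many of a client's consecutive measurements fall outside [low, high]."""
    return sum(1 for v in values if not (low <= v <= high))

def group_averages(df, item_a, item_b, ref_range):
    """df: one row per client per examination, with columns ['client', item_a, item_b]."""
    low, high = ref_range
    freq = df.groupby("client")[item_a].apply(
        lambda s: out_of_range_frequency(s, low, high))
    mean_b = df.groupby("client")[item_b].mean()
    paired = pd.DataFrame({"freq": freq, "mean_b": mean_b})
    return paired.groupby("freq")["mean_b"].mean()

# Toy usage: two clients, three examinations each, hypothetical AST/ALT values.
df = pd.DataFrame({
    "client": [1, 1, 1, 2, 2, 2],
    "AST":    [25, 45, 50, 20, 22, 24],
    "ALT":    [30, 60, 55, 18, 20, 19],
})
print(group_averages(df, "AST", "ALT", ref_range=(10, 40)))
```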
-
2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, 3727-3734, 2009 Peer-reviewed
A communication robot must recognize a referred-to object to support us in daily life. However, using our wide human vocabulary, we often refer to objects in terms that are incomprehensible to the robot. This paper focuses on lexical entrainment to solve this problem. Lexical entrainment is the phenomenon of people tending to adopt the terms of their interlocutor. While this has been well studied in human-computer interaction, few published papers have approached it in human-robot interaction. To investigate how lexical entrainment occurs in human-robot interaction, we conducted experiments in which people instructed a robot to move objects. Our results show that two types of lexical entrainment occur in human-robot interaction. We also discuss the effects of the state of objects on lexical entrainment. Finally, we developed a test-bed system for recognizing a referred-to object on the basis of knowledge from our experiments.
-
2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, 5003-+, 2009 Peer-reviewed
In human-robot interaction, robots often fail to lead humans to intended reactions due to their limited ability to express affective nuances. In this paper, we propose a motion modification method that combines affective nuances with arbitrary motions of humanoid robots to induce intended reactions by expressing affective states. The method is applicable to various humanoid robots that differ in degrees of freedom or appearance, and the affective nuances are parametrically expressed in a two-dimensional model comprising valence and arousal. The experimental results showed that the desired affective nuances could be expressed by our method, but they also suggested some limitations. We believe that the method will contribute to interactive systems in which robots can communicate with appropriate expressions in various contexts.
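As a rough illustration of mapping a two-dimensional valence-arousal state onto motion parameters, here is a minimal sketch; the specific mapping below (playback speed, amplitude, posture offset) is an assumption for illustration only and is not the proposed method.

```python
# Hypothetical sketch: modulate an arbitrary joint trajectory with parameters
# derived from a (valence, arousal) point in [-1, 1]^2. Illustrative mapping,
# not the paper's model.
import numpy as np

def modulate_motion(trajectory, valence, arousal):
    """trajectory: (frames x joints) array of joint angles in radians."""
    speed = 1.0 + 0.5 * arousal          # higher arousal -> faster playback (assumed)
    amplitude = 1.0 + 0.3 * arousal      # and larger gestures (assumed)
    posture = 0.1 * valence              # positive valence -> more upright offset (assumed)
    frames = len(trajectory)
    resampled_idx = np.clip(
        (np.arange(int(frames / speed)) * speed).astype(int), 0, frames - 1)
    return trajectory[resampled_idx] * amplitude + posture

# Toy usage: a 50-frame, 3-joint base motion played with a "happy" nuance.
base = np.tile(np.sin(np.linspace(0, np.pi, 50))[:, None], (1, 3))
happy = modulate_motion(base, valence=0.8, arousal=0.6)
```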
-
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 22-26, 2008, Acropolis Convention Center, Nice, France, 3336-3343, 2008 Peer-reviewed
-
HUMANOIDS: 2007 7TH IEEE-RAS INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS, 366-370, 2007 Peer-reviewed
We investigate how users specify physical objects in human-humanoid interaction and discuss confirmation methods for humanoid robots. In the future, such robots are expected to serve as daily-life communication partners in much the same way as family members or friends. We conducted an experiment in which users asked a robot to retrieve books, in order to investigate their methods of specifying location. Preliminary results show that only about half of the users' specifications of a book can be understood with speech recognition alone, whereas combining pointing with perfect speech recognition can identify almost all user demands. Since humanoid robots still need confirmation in actual situations due to imprecise pointing or noise problems, we also considered the robots' confirmation behaviors. The experimental results suggest that confirmation that includes both speaking a book's title and pointing gives the user more ways to specify a book and reduces workload.
-
ICMI'07: PROCEEDINGS OF THE NINTH INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES, 279-284, 2007 Peer-reviewed
This paper investigates the effect of the body direction of a stuffed-doll robot in an animation system on user impressions. Many systems that combine a computer display with a robot have been developed, and one of their applications is entertainment, for example an animation system. In these systems the robot, as a 3D agent, can be more effective than a 2D agent in helping the user enjoy the animation experience by using spatial characteristics, such as body direction, as a means of expression. The direction in which the robot faces, i.e., towards the human or towards the display, is investigated here. User impressions from 25 subjects were examined. The experimental results show that the robot facing the display together with the user is effective for eliciting good feelings from the user, regardless of the user's personality characteristics. The results also suggest that extroverted subjects tend to feel better towards a robot facing the user than introverted ones do.
-
2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12, 4564-+, 2006
In face-to-face communication, eyes play a central role, for example in directing attention and regulating turn-taking. For this reason, gaze has been a central topic in several fields of interaction study. Although many psychology findings have encouraged previous work in both human-computer and human-robot interaction studies, there have so far been few explorations of how, and when, to move an agent's eyes for communication, which is the topic we address in this study. The impression a person forms from an interaction is strongly influenced by the degree to which their partner's gaze direction correlates with their own. In this paper, we propose methods of controlling a robot's gaze responsively to its partner's gaze and confirm their effect on the feeling of being looked at, which is considered the basis of conveying impressions through gaze, in face-to-face interaction experiments. Furthermore, an additional preliminary experiment with an on-screen agent shows the possibility of using blinking behaviour as another modality for responding to a partner.
-
International Journal of Human-Computer Studies, 62(2) 267-279, Mar, 2005 Peer-reviewed
-
2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, September 28 - October 2, 2004, 1908-1913, 2004 Peer-reviewed
-
Human-Computer Interaction INTERACT '01: IFIP TC13 International Conference on Human-Computer Interaction(INTERACT), 690-691, 2001
Misc. (8)
-
Proceedings of the 28th Annual Conference of the Virtual Reality Society of Japan (CD-ROM), 2023
-
PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTIONS (HRI 2011), 465-472, 2011
This paper presents the effect of a robot's active touch on improving people's motivation. For services in the education and healthcare fields, a robot may be useful for improving motivation to perform repetitive and monotonous tasks such as exercising or taking medicine. Previous research demonstrated that a user's touch improves impressions of a robot, but it did not clarify whether a robot's touch, especially an active touch, has enough influence on people's motivation. We implemented an active touch behavior and experimentally investigated its effect on motivation. In the experiment, a robot requested participants to perform a monotonous task with an active touch, a passive touch, or no touch. The results showed that an active touch by the robot increased the number of working actions and the amount of working time for the task. This suggests that a robot's active touch can help people improve their motivation. We believe that a robot's active touch behavior is useful for robot services such as education and healthcare.
-
Journal of the Robotics Society of Japan, 26(5) 427-430, Jul 15, 2008
-
Proceedings of the 26th Annual Conference of the Robotics Society of Japan (CD-ROM), 2008
Presentations (1)
Research Projects (3)
-
Grants-in-Aid for Scientific Research, Japan Society for the Promotion of Science, Apr, 2016 - Mar, 2019
-
Grants-in-Aid for Scientific Research, Japan Society for the Promotion of Science, Jul, 2014 - Mar, 2019
-
Grants-in-Aid for Scientific Research, Japan Society for the Promotion of Science, Apr, 2012 - Mar, 2015