Osaka Kyoiku University Researcher Information
Research Achievements
Basic Information
- Affiliation
- Associate Professor, Division of Math, Sciences, and Information Technology in Education, Osaka Kyoiku University
- Degrees
- Master of Engineering (Graduate School of Engineering, Osaka University); Doctor of Engineering (Graduate School of Engineering, Osaka University)
- J-GLOBAL ID
- 200901077951261581
- researchmap Member ID
- 1000244783
- External Links
Research Areas (5)
Career (8)
-
Apr 2020 - Present
-
Apr 2011 - Mar 2020
-
Apr 2008 - Mar 2011
-
Apr 2008 - Mar 2010
-
Jun 2004 - Mar 2008
Education (1)
-
Apr 1997 - Mar 2003
Awards (6)
Papers (32)
-
Memoirs of Osaka Kyoiku University, Educational Science, 71, 497-506, Feb 28, 2023. Lead author, Last author, Corresponding author
-
Memoirs of Osaka Kyoiku University, Educational Science, 70, 395-404, Feb 28, 2022. Lead author, Last author, Corresponding author
-
Memoirs of Osaka Kyoiku University, Humanities and Social Science / Natural Science, 68, 149-155, Feb 29, 2020. Peer-reviewed. In inquiry-based and integrated learning, building one's own measuring instruments for experiments is useful for developing students' thinking skills. This study focuses on a simple colorimeter, describing the construction and evaluation of one that uses a phototransistor as the light receiver and an LED as the light source. As with measurements taken with a spectrophotometer (520 nm), the absorbances calculated from the device's readings were linear in the concentration of an Acid Red 27 aqueous solution, and the device could also be used as a turbidimeter. As an application, a test of the starch content of commercial powdered sugar agreed with the spectrophotometer result to within 0.1%. These results suggest the usefulness of the simple colorimeter built in this study.
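The computation underlying the abstract above is the Beer-Lambert relation A = log10(I0/I), where I0 is the light-sensor reading for the blank and I the reading for the sample; under that law, absorbance is linear in concentration. A minimal sketch with invented phototransistor readings (not the paper's actual data):

```python
import math

def absorbance(i_sample: float, i_blank: float) -> float:
    """Absorbance from transmitted-light readings (Beer-Lambert law).

    i_blank:  phototransistor reading for the blank (solvent only)
    i_sample: reading for the sample solution
    """
    return math.log10(i_blank / i_sample)

# With Beer-Lambert behaviour, absorbance grows linearly with concentration.
readings = {0.0: 1000.0, 0.5: 708.0, 1.0: 501.0}  # concentration -> raw reading
for conc, raw in readings.items():
    print(conc, round(absorbance(raw, readings[0.0]), 3))
```

The linearity check in the paper amounts to fitting concentration against these computed absorbances and inspecting the residuals.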
-
IPSJ Transactions on Computers and Education (TCE), 3(1), 53-63, Feb 22, 2017. Peer-reviewed. Lead author, Last author, Corresponding author. As tablet PCs spread, programming environments that run on tablets rather than PCs are increasing. Microcontrollers -- computers built from one or a few chips -- are now used not only inside products such as home appliances and cars, but also in hobby electronics and in programming education at school. However, no visual programming environment on a tablet for programming such microcontrollers could be found; if one existed, more people could be expected to become familiar with microcontrollers and electronics. If an interpreter runs on the microcontroller, the programming environment needs no compiler, and if the interpreter provides debugging information, even a resource-limited computer such as a tablet can display the program's execution state and the microcontroller's inputs and outputs. We therefore implemented aiBlocks, a visual programming environment on Android tablets that develops programs for an interpreter-equipped microcontroller, and report it together with an evaluation by beginners tackling programming and electronics for the first time.
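The key design point in the abstract above -- put the interpreter on the microcontroller so the tablet needs only an editor and a debug viewer, no compiler -- can be sketched in a few lines. This is an illustrative toy, not aiBlocks' actual instruction set; the three opcodes and the trace callback are invented:

```python
# The "tablet" sends a tiny program; the on-MCU interpreter executes it
# and reports debug state (current instruction, variable values) after
# every step, so the host side never needs a compiler.

def run(program, trace):
    """Execute (op, *args) instructions; call trace() after each step."""
    env, pc = {}, 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":            # set <name> <value>
            env[args[0]] = args[1]
        elif op == "add":          # add <name> <delta>
            env[args[0]] += args[1]
        elif op == "jump_if_lt":   # loop while <name> < <limit>
            name, limit, target = args
            if env[name] < limit:
                pc = target
                trace(pc, env)
                continue
        trace(pc, env)
        pc += 1
    return env

log = []  # what a debug viewer would render
env = run(
    [("set", "led", 0), ("add", "led", 1), ("jump_if_lt", "led", 3, 1)],
    lambda pc, env: log.append((pc, dict(env))),
)
print(env["led"])  # 3 after the loop exits
```

Streaming the `trace` snapshots over serial is what lets a resource-limited tablet show live program state, as the abstract describes.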
-
Human-Robot Interaction in Social Robotics, 2017
-
Digital Practice (IPSJ), 6(2), 123-128, Apr 15, 2015. Peer-reviewed. Lead author, Last author, Corresponding author. School science education makes wide use of instruments such as rod thermometers and moving-coil voltmeters and ammeters. Meanwhile, as flexible display panels such as LCDs have become inexpensive and easy to use, such instrument displays are being replaced outside schools. Tablet devices are now spreading into schools as well, and their display flexibility exceeds that of conventional instruments. Exploiting this, we developed iTester, which can present measurements as numeric values, as an analog meter, as a simulated rod thermometer, or as a graph (oscilloscope style). This report first describes how iTester was developed. From trials in actual elementary and junior high schools, we report that using a tablet as a display (1) suits classrooms better than conventional instruments in terms of visibility, (2) may allow more precise measurement than before, and (3) may qualitatively change teaching; from a trial in a special-needs school, we report (4) the realization of universal design for instruments.
-
Journal of the Robotics Society of Japan, 28(9), 1110-1119, Nov 15, 2010. Peer-reviewed. In this paper, we report the importance of the reactive behaviors of humanoid robots against human actions for smooth communication. We hypothesize that the reactive behaviors of robots play an important role in achieving human-like communication between humans and robots, since the latter need to be recognized by the former as communication partners. To evaluate this hypothesis, we conducted psychological experiments in which we presented subjects with four types of reactive behaviors resulting from pushing a wheeled inverted-pendulum-type humanoid robot. From the experiment, we found that subjects' impressions of the robot regarding extroversion and neuroticism changed with the robot's reactive behaviors. We also discuss the reasons for such changes in impression by comparing the robot's reactive behaviors with those of humans.
-
Journal of the Robotics Society of Japan, 26(7), 812-820, Oct 15, 2008. Peer-reviewed. Lead author, Last author, Corresponding author. In the near future, robots are expected to actively participate in our daily lives. Once this time arrives, they will need to be socially accepted by people in various communities. However, it remains unknown what issues must be solved to make robots socially accepted. We conducted a long-term experiment in which a communication robot interacted with people daily in an office. From the questionnaire answers in that experiment, we found that three fundamental issues are required for a robot to be socially accepted: offering familiarity, reading the situation, and playing a social role. In this paper, we propose the hypothesis that these three fundamental issues must be fulfilled by a socially accepted robot. We then tested our hypothesis through a six-week experiment in an office. We present the details of the experimental results and discuss which of the issues is most important for a socially accepted robot.
-
Journal of the Robotics Society of Japan, 26(6), 485-492, Aug 29, 2008. Peer-reviewed. Humans always sway their bodies when standing. Since the swaying is natural, they are not conscious of it. Today, however, almost all robots are designed to suppress swaying to ensure stability. If communication robots could control swaying appropriately, it might help humans anthropomorphize them. In this paper, we evaluate how the swaying of a humanoid robot affects human swaying and humans' impressions of the robot. We measured the neck movements of the human and the robot, and asked the subjects to answer a questionnaire after the experiments. We found that human swaying and impressions were affected by the robot's swaying. Human swaying is observed whenever communication takes place. To produce swaying on our humanoid robot we use only wheel control, which leaves the upper body free for other actions. The results of this paper thus indicate how to realize swaying in robots.
-
IEEE TRANSACTIONS ON ROBOTICS, 24(4), 911-916, Aug 2008. Peer-reviewed. Lead author, Last author, Corresponding author. Human beings subconsciously adapt their behaviors to a communication partner in order to make interactions run smoothly. In human-robot interactions, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human-robot interactions, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just like a human would. However, most previous research expected the human to consciously give feedback, which might interfere with the aim of the interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interactions. The mechanism uses gazing at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. A pilot study with a humanoid robot that has ten interaction behaviors was conducted. The results from 12 subjects suggest that the proposed mechanism enables autonomous adaptation to individual preferences. Detailed discussion and conclusions are also presented.
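The adaptation loop described above reads comfort/discomfort signals and nudges behavior parameters such as interaction distance. A heavily simplified, hypothetical sketch of that idea -- the reward function, step sizes, and single-parameter policy are all invented here; the paper's actual mechanism is reinforcement learning over several behavior parameters at once:

```python
import random

random.seed(0)

def reward(distance, preferred=0.8):
    """Hypothetical comfort signal: higher when the robot's interaction
    distance matches the (unknown to the robot) preferred distance."""
    return -abs(distance - preferred)

def adapt(distance=1.5, step=0.1, lr=0.5, iters=200):
    """Perturb the parameter, keep changes in proportion to the reward
    difference -- a crude finite-difference gradient ascent."""
    for _ in range(iters):
        delta = random.choice([-step, step])        # perturb the policy
        gain = reward(distance + delta) - reward(distance)
        distance += lr * gain * (delta / step)      # follow the gradient estimate
    return distance

print(round(adapt(), 2))  # settles near the hypothetical preferred 0.8 m
```

The point of the sketch is that only a scalar comfort signal is needed: the human never gives explicit feedback, mirroring the paper's use of subconscious body signals.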
-
MM'08 - Proceedings of the 2008 ACM International Conference on Multimedia, with co-located Symposium and Workshops, 793-796, 2008. Peer-reviewed. In this paper we propose an intuitive page-turning and browsing interface for e-books on flexible e-paper, based on user studies. Our user studies showed various types of page-turning actions, such as flipping, grasping, and sliding, depending on the situation or user. We categorized these actions into three categories: turning, flipping through, and leafing through the page(s). Based on this categorized model, we developed a conceptual design and prototype of an interface for an e-book reader that enables intuitive page-turning interactions using a simple architecture in both hardware and software. The prototype has a flexible plastic sheet with bend sensors attached to a small LCD monitor, physically uniting the visual display with a tangible control interface based on the natural page-turning actions used in reading a real book. The prototype handles all three page-turning actions observed in the user studies by interpreting the bend degree of the sheet.
-
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 22-26, 2008, Acropolis Convention Center, Nice, France, 3336-3343, 2008. Peer-reviewed. Lead author, Last author, Corresponding author
-
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 22-26, 2008, Acropolis Convention Center, Nice, France, 2637-2642, 2008. Peer-reviewed
-
SIGGRAPH 2008 Posters, B134-2 pages, 2008. Peer-reviewed
-
Journal of the Robotics Society of Japan, 25(6), 822-833, Sep 15, 2007. Peer-reviewed. Lead author, Last author, Corresponding author. We believe that humanoids will take an active part in our daily lives in the near future as important media, since a human-size humanoid makes it easier for people to perceive humanlike expressions, emotions, wishes, and so forth, in addition to offering the capabilities of other media, such as collecting and providing information from the Internet and from ubiquitous sensors connected through a network. Once this time arrives, humanoids will be expected to have the social skills necessary for interacting with people in addition to the ability to carry out their own tasks. We call an interaction that increases familiarity and makes communication smoother a "social interaction." We have recently developed a human-size humanoid called "Robovie-IV," which features interaction abilities, such as vocal chatting, that are intended as social interactions. Using Robovie-IV, we conducted an experiment in which it interacted with people in an everyday office environment. This paper discusses the design requirements of Robovie-IV and introduces an overview of its hardware and software architectures. The experimental results, discussion, and conclusions are then presented.
-
Journal of the Robotics Society of Japan, 24(7), 820-829, Oct 15, 2006. Peer-reviewed. Lead author, Last author, Corresponding author. When humans interact in a social context, there are many factors apart from the actual communication that need to be considered. Previous studies in the behavioral sciences have shown that a certain amount of personal space is needed and that different people tend to meet the gaze of others to different extents. For humans this is mostly subconscious, but when two persons interact, these factors are automatically adjusted to avoid discomfort. In this paper we propose an adaptation mechanism for robot behaviors to make human-robot interactions run more smoothly. The mechanism is based on policy gradient reinforcement learning: it reads minute body signals from a human partner and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interaction. We show through an experiment with twelve subjects that this enables autonomous adaptation to individual preferences.
-
AUTONOMOUS ROBOTS, 21(1), 3-14, Aug 2006. Peer-reviewed. Most current mobile robots are designed to determine their actions according to their positions. Before making a decision, they need to localize themselves; their observation strategies are therefore mainly for self-localization. However, observation strategies should serve not only self-localization but also decision making. We propose an observation strategy that enables a mobile robot equipped with a limited-viewing-angle camera to make decisions without self-localization. The robot makes decisions based on a decision tree and on prediction trees of observations constructed from its experiences. The trees are built on an information criterion for the action decision, not for self-localization or state estimation. Experimental results with a four-legged robot are shown and discussed.
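The trees described above are built on an information criterion for the action decision. The core of such a criterion is information gain: prefer the observation that most reduces uncertainty about which action to take. An illustrative sketch with invented observations and actions (not the authors' implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of action labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr):
    """Expected entropy reduction about the action from observing `attr`."""
    base = entropy([action for _, action in rows])
    groups = {}
    for feats, action in rows:
        groups.setdefault(feats[attr], []).append(action)
    remainder = sum(len(g) / len(rows) * entropy(g) for g in groups.values())
    return base - remainder

# Hypothetical data: the ball's bearing is decisive, the goal's is not.
rows = [
    ({"ball": "left", "goal": "near"}, "turn_left"),
    ({"ball": "left", "goal": "far"}, "turn_left"),
    ({"ball": "right", "goal": "near"}, "turn_right"),
    ({"ball": "right", "goal": "far"}, "turn_right"),
]
best = max(["ball", "goal"], key=lambda a: information_gain(rows, a))
print(best)  # "ball": observing it fully determines the action
```

Recursively applying this selection to each split is what yields a decision tree oriented toward action choice rather than state estimation.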
-
Proc. of the 36th International Symposium on Robotics (ISR2005), Nov 2005. Peer-reviewed
-
2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 1594-1601, 2005. Peer-reviewed. In this paper we propose an adaptation mechanism for robot behaviors to make robot-human interactions run more smoothly. The mechanism is based on reinforcement learning: it reads minute body signals from a human partner and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interaction. We show through an experiment with twelve subjects that this enables autonomous adaptation to individual preferences.
-
ADVANCED ROBOTICS, 19(2), 207-218, 2005. Peer-reviewed. This paper proposes a subjective map representation that enables a robot in a multi-agent system to make decisions in a dynamic, hostile environment. A typical situation can be found in the Sony four-legged robot league of the RoboCup competition. The subjective map is a map of the environment that each agent maintains regardless of the objective consistency of the representation among the agents. Due to the map's subjectivity, it is not affected by incorrect information acquired by other agents. The method is compared with conventional methods with and without information sharing.
-
Journal of the Robotics Society of Japan, 21(7), 819-827, Oct 15, 2003. Peer-reviewed. Lead author, Last author, Corresponding author. Visual attention is one of the most important issues for a vision-guided mobile robot. Methods for visual attention control based on an information criterion have been proposed [3][9], but the robot had to stop walking to observe and decide. This paper presents a method that enables the robot to observe and decide more efficiently and adaptively while walking. The method uses the expected information gain from future observations for attention control and action decision. It also proposes an image compensation method to handle image changes due to the robot's motion. Both are used to estimate observation probabilities from observations made while walking; action probabilities are then estimated from a decision tree based on the information criterion. The method is applied to a four-legged robot. The visual attention control in the method and future issues are discussed.
-
IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 291-296, 2003. Peer-reviewed. This paper proposes a subjective map representation that enables a multiagent system to make decisions in a dynamic, hostile environment. A typical situation can be found in the Sony four-legged robot league of the RoboCup competition [1]. The subjective map is a map of the environment that each agent maintains regardless of the objective consistency of the representation among the agents. Owing to the map's subjectivity, it is not affected by incorrect information belonging to other agents -- for example, by non-negligible errors caused by dynamic changes in the environment, such as falling down or being picked up and moved by the referee. A potential field is defined on the subjective map in terms of subtasks, such as approaching and shooting the ball, and the field is dynamically updated so that the robot can decide what to do next. The method is compared with conventional methods that share or do not share information.
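The decision scheme in the abstract above -- a potential field defined on each robot's own subjective map, descended to choose the next subtask-directed move -- can be illustrated as follows. The quadratic attractive field and the grid moves are invented for the sketch, not the paper's formulation:

```python
def potential(pos, ball):
    """Attractive potential: grows with squared distance from the ball."""
    return (pos[0] - ball[0]) ** 2 + (pos[1] - ball[1]) ** 2

def next_move(pos, ball):
    """Pick the neighboring move that descends the field fastest."""
    moves = {"stay": (0, 0), "north": (0, 1), "south": (0, -1),
             "east": (1, 0), "west": (-1, 0)}
    return min(
        moves,
        key=lambda m: potential((pos[0] + moves[m][0], pos[1] + moves[m][1]), ball),
    )

# Each robot keeps its own (possibly inconsistent) ball estimate on its
# subjective map, so a teammate's wrong estimate never corrupts this
# robot's decision.
print(next_move((0, 0), ball=(3, 1)))  # "east": the steepest descent
```

Updating the ball estimate (and hence the field) from the robot's own observations each cycle gives the dynamic update the abstract describes.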
-
Journal of the Robotics Society of Japan, 20(7), 751-758, Oct 15, 2002. Peer-reviewed. Lead author, Last author, Corresponding author. Visual attention is one of the most important issues for a mobile robot accomplishing a given task in complicated environments, since vision sensors bring a huge amount of data. This paper proposes a method of sensor space segmentation for visual attention control that enables efficient observation, taking the time needed for observation into account. Efficiency is considered from the viewpoint not of geometrical reconstruction but of unique action selection based on an information criterion, regardless of localization uncertainty. The method is applied to a four-legged robot that tries to shoot a ball into the goal. To build a decision tree, a training set is given by the designer, and a kind of off-line learning is performed on the given data set. The visual attention control in the method is discussed and future issues are shown.
-
2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS, 244-249, 2002. Peer-reviewed. Visual attention is one of the most important issues for a vision-guided mobile robot. Methods for visual attention control based on an information criterion have been proposed [3, 4], but the robot had to stop walking to observe and decide. This paper presents a method that enables the robot to observe and decide more efficiently and adaptively while walking. The method uses the expected information gain from future observations for attention control and action decision. It also proposes an image compensation method to handle image changes due to the robot's motion. Both are used to estimate observation probabilities from observations made while walking; action probabilities are then estimated from a decision tree based on the information criterion. The method is applied to a four-legged robot. The visual attention control in the method and future issues are discussed.
-
Journal of the Robotics Society of Japan, 19(6), 793-800, Sep 15, 2001. Peer-reviewed. Lead author, Last author, Corresponding author. This paper proposes a method for constructing a decision tree and prediction trees of landmarks that enable a robot with a limited visual angle to make decisions without self-localization in the environment. Since global positioning from the 3-D reconstruction of landmarks is generally time-consuming and prone to errors, the robot makes decisions based on the appearance of landmarks. By using the decision and prediction trees built on an information criterion, the robot can achieve the task efficiently.
-
IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 1714-1719, 2001. Peer-reviewed. Visual attention is one of the most important issues for a vision-guided mobile robot, not simply because visual information brings a huge amount of data but also because the visual field is limited, making gaze control necessary. This paper proposes a method of sensor space segmentation for visual attention control that enables mobile robots to observe efficiently. Efficiency is considered from the viewpoint not of geometrical reconstruction but of unique action selection based on an information criterion, regardless of localization uncertainty. The method builds a decision tree based on the information criterion while taking the time needed for observation into account, and attention control is done by following the tree. The tree is rebuilt by introducing contextual information for more efficient attention control. The method is applied to a four-legged robot that tries to shoot a ball into the goal. The visual attention control in the method is discussed and future issues are shown.
-
2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, 1038-1043, 2000. Peer-reviewed. Self-localization seems necessary for mobile robot navigation, but conventional methods such as geometric reconstruction from landmark observations are generally time-consuming and prone to errors. This paper proposes a method that constructs a decision tree and prediction trees of landmark appearance, enabling a mobile robot with a limited visual angle to observe efficiently and make decisions without global positioning in the environment. By constructing these trees based on an information criterion, the robot can accomplish the given task efficiently. The validity of the method is shown with a four-legged robot.
MISC (14)
-
Memoirs of Osaka Kyoiku University, Ser. III, Natural Science and Applied Science, 64(2), 41-54, Feb 29, 2016. Microcontrollers are used in many industrial products, including familiar home appliances, and many of them connect easily to the sensors and actuators that link computers to the real world; their use is therefore expanding beyond engineering into fields such as interaction design education and research and junior high school technology classes. Meanwhile, small, inexpensive devices such as tablets and smartphones are spreading. If microcontrollers could be programmed easily from such devices, the barrier to using them would be lowered. Such devices have limited computing resources, but if an interpreter is placed on the microcontroller and the device side holds no compiler -- only an editor and debugging support -- the limited resources are not a problem. In this study we developed iArduino, an interpreted language that runs on a microcontroller and allows interactive programming, together with programming environments that visualize program behavior and input/output: iArduinoTerminal, which runs on a PC, and iArduinoTerminal for Android, which runs on tablets. The interpreter runs on Arduino's microcontroller, its grammar resembles the Arduino language (C/C++), and it has a built-in debugging interface that visualizes pin values and variables. The language and tools could be used in microcontroller programming courses to help students.
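The iArduino design described above keeps the interpreter, and therefore the live program state, on the microcontroller, so the terminal only has to render snapshots of pins and variables. An illustrative toy in the same spirit -- the command names and snapshot protocol here are invented, not iArduino's actual grammar:

```python
# A simulated "board" interprets one statement at a time and returns a
# pin-state snapshot after each, which is exactly what a terminal-style
# debugging view needs in order to visualize pin values and variables.

class Board:
    def __init__(self):
        self.pins = {}
        self.vars = {}

    def execute(self, line):
        op, *args = line.split()
        if op == "write":           # write <pin> <0|1>
            self.pins[int(args[0])] = int(args[1])
        elif op == "set":           # set <var> <value>
            self.vars[args[0]] = int(args[1])
        elif op == "copy":          # copy <var> <pin>
            self.pins[int(args[1])] = self.vars[args[0]]
        return dict(self.pins)      # snapshot for the terminal to display

board = Board()
for stmt in ["set level 1", "write 13 0", "copy level 13"]:
    snapshot = board.execute(stmt)
print(snapshot)  # {13: 1}: pin 13 ends high, mirroring the variable
```

Because every statement is interpreted on the board, the host tool can stay a thin editor-plus-viewer, which is why the approach suits tablets with limited resources.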
-
IPSJ SIG Technical Report, Computers and Education (CE), 2015(8), 1-6, Feb 7, 2015. In recent years, tablets have spread widely, and a growing number of people own only a tablet rather than a PC. A visual programming environment for tablets that can build microcontroller programs would therefore make it easier for more people to start electronics projects with microcontrollers, but no such environment could be found. Moreover, putting an interpreter on the microcontroller makes a programming environment feasible even on resource-limited computers such as tablets. We therefore developed aiBlocks, a visual programming environment for tablets that builds programs for microcontrollers running the iArduino interpreter; we report on it and present a line-tracing car as a construction example.
-
Memoirs of Osaka Kyoiku University, Ser. V, Subject Education, 62(2), 63-70, Feb 2014. It is expected that children learn a great deal when they accomplish their own goals through programming. RoboCupJunior is an educational activity for students up through age 19; rather than tele-operating robots, children compete with robots that they prepare and program themselves, and many children take part. We developed a prototype programming text whose goal is to help readers prepare, on the programming side, to participate in RoboCupJunior soccer challenges, designed so that robot programming raises their interest in the external world. We first introduce the text's design policy and structure, then present an evaluation by students of a robot school, who assessed that the book would be beneficial for their junior fellows.
-
IPSJ SIG Technical Report, Computers and Education (CE), 2013(8), 1-4, Mar 8, 2013. Microcontrollers are used in many industrial products, including familiar home appliances, and many connect easily to the sensors and actuators that link computers to the real world; their use is expanding beyond engineering into fields such as interaction design education and research and junior high school technology classes. Meanwhile, small, inexpensive devices such as tablets and smartphones are spreading, and being able to program microcontrollers easily from them would lower the barrier to use. Such devices have limited computing resources, but if an interpreter is placed on the microcontroller and the device side holds only an editor and debugging support rather than a compiler, the limited resources are not a problem. In this study, we report a programming environment, running on a tablet, for microcontrollers equipped with an interpreted language.
-
IPSJ SIG Technical Report, Ubiquitous Computing Systems (UBI), 2012(8), 1-6, May 10, 2012. This paper reports an implementation of iArduino, an interpreted language that runs on a microcontroller and allows interactive programming, and iArduinoTerminal, a tool that visualizes program behavior and input/output, both aimed at beginners in microcontroller-based projects; we present and discuss their design policy and implementation, to help beginners understand programming and electronic circuits.
Books and Other Publications (22)
Presentations (41)
Courses Taught (13)
-
Advanced Studies in Technology Education II-A (Graduate School, Osaka Kyoiku University)
-
Advanced Studies in Technology Education II-B (Graduate School, Osaka Kyoiku University)
-
Advanced Studies in Technology Education I-B (Graduate School, Osaka Kyoiku University)
-
Monodzukuri (Manufacturing) Education Practice (Graduate School, Osaka Kyoiku University)
-
Seminar in Monodzukuri (Manufacturing) Education Practice (Graduate School, Osaka Kyoiku University)
Professional Memberships (4)
Research Projects (2)
-
Japan Society for the Promotion of Science (JSPS), Grants-in-Aid for Scientific Research, Grant-in-Aid for Young Scientists (B), Apr 2013 - Mar 2016
-
Funded Research