Curriculum Vitae

Noriaki Mitsunaga

  (光永 法明)

Profile Information

Affiliation
Associate Professor, Division of Math, Sciences, and Information Technology in Education, Osaka Kyoiku University
Degree
Master of Engineering (Graduate School of Engineering, Osaka University)
Doctor of Engineering (Graduate School of Engineering, Osaka University)

J-GLOBAL ID
200901077951261581
researchmap Member ID
1000244783

Papers

 32
  • MATSUMOTO, Katsura, MITSUNAGA, Noriaki
    Memoirs of Osaka Kyoiku University. Humanities and Social Science, Natural Science, 72 57-66, Feb 29, 2024  Peer-reviewed
  • 光永 法明, 飯田 彩楓, 岩井 俊也
    大阪教育大学紀要. 総合教育科学, 71 497-506, Feb 28, 2023  Lead author, Last author, Corresponding author
  • 光永 法明
    大阪教育大学紀要. 総合教育科学, 70 395-404, Feb 28, 2022  Lead author, Last author, Corresponding author
  • Kana Ioku, Noriaki MITSUNAGA, Yasuo Tohda, Masatsugu Taneda
    Memoirs of Osaka Kyoiku University. Humanities and Social Science, Natural Science, 68 149-155, Feb 29, 2020  Peer-reviewed
    The science experiment teaching materials handled at school involve many qualitative confirmation reactions but few quantitative experiments. However, it is important to develop thinking skills by reasoning from obtained experimental results. In this study, the performance of a simple handmade colorimeter that can easily be used as a teaching material was evaluated by determining the starch content of powdered sugar. The colorimeter was composed of an LED and a photoresistor. Three devices were constructed and used for the quantitative analysis. The starch contents obtained with them were compared with each other, and no significant error was observed. The amount of starch in commercial powdered sugar was almost the same as the value determined by a UV-visible spectrophotometer. Therefore, the simple handmade colorimeter is useful for science and home economics classes at school.
  • 光永 法明, 奈良 明香
    日本産業技術教育学会誌, 60(4) 181-190, Dec, 2018  Peer-reviewed, Lead author, Last author, Corresponding author
  • 光永 法明, 井芹 威晴, 吉田 図夢
    情報処理学会論文誌教育とコンピュータ(TCE), 3(1) 53-63, Feb 22, 2017  Peer-reviewed, Lead author, Last author, Corresponding author
    As tablet devices have spread, programming environments that run on tablets rather than PCs are increasing. Meanwhile, micro controllers, small computers built from one or a few chips, are used not only inside products such as home appliances and cars but also in hobby electronics and in programming education at school. However, no visual programming environment on tablets for programming such micro controllers could be found; if one existed, more people would be expected to become familiar with micro controllers and electronic circuits. If an interpreter is placed on the micro controller, the programming environment needs no compiler, and if the interpreter provides debugging information, even a resource-limited computer such as a tablet can display the running state of the program and the inputs and outputs of the micro controller. We therefore developed aiBlocks, a visual programming environment on Android OS for tablets that builds programs for a micro controller carrying an interpreter. In this paper, we report how we developed aiBlocks together with its evaluation by beginners who tried programming and electronics for the first time.
  • Mitsunaga, N., Smith, C., Kanda, T., Ishiguro, H., Hagita, N.
    Human-Robot Interaction in Social Robotics, 2017  
  • 光永法明
    デジタルプラクティス, 6(2) 123-128, Apr 15, 2015  Peer-reviewed, Lead author, Last author, Corresponding author
    In school science education, instruments such as rod thermometers and moving-coil voltmeters and ammeters are widely used. Meanwhile, display panels such as LCDs, which offer high freedom of presentation, have become inexpensive and easy to use, so outside schools the displays of such instruments are being replaced. Tablet devices are now spreading into schools as well, and their displays offer more freedom than conventional instruments. Taking advantage of this, we developed iTester, which can present measurements as numbers, as an analog meter, as a simulated rod thermometer, and as a graph (oscilloscope view). This paper first describes how iTester was developed. From its use in actual elementary and junior high schools, we report that using a tablet as a display 1) suits classrooms better than conventional instruments in terms of visibility, 2) may enable more precise measurements than before, and 3) may bring qualitative changes to teaching; from its use in a special-needs school, we report 4) the realization of a universal design of instruments.
  • KAKIO Masayuki, MIYASHITA Takahiro, MITSUNAGA Noriaki, ISHIGURO Hiroshi, HAGITA Norihiro
    Journal of the Robotics Society of Japan, 28(9) 1110-1119, Nov 15, 2010  Peer-reviewed
    In this paper, we report the importance of the reactive behaviors of humanoid robots against human actions for smooth communication. We hypothesize that the reactive behaviors of robots play an important role in achieving human-like communication between humans and robots, since the latter need to be recognized by the former as communication partners. To evaluate this hypothesis, we conducted psychological experiments in which we presented subjects with four types of reactive behaviors resulting from pushing a wheeled inverted-pendulum-type humanoid robot. From the experiment, we found that the subjects' impressions of the robot regarding extroversion and neuroticism changed with the robot's reactive behaviors. We also discuss the reasons for these changes in impressions by comparing the robot's and humans' reactive behaviors.
  • MITSUNAGA Noriaki, MIYASHITA Zenta, SHINOZAWA Kazuhiko, MIYASHITA Takahiro, ISHIGURO Hiroshi, HAGITA Norihiro
    Journal of the Robotics Society of Japan, 26(7) 812-820, Oct 15, 2008  Peer-reviewed, Lead author, Last author, Corresponding author
    In the near future, robots are expected to actively participate in our daily lives. Once this time arrives, they will need to be socially accepted by people in various communities. However, it remains unknown which issues must be solved to make robots socially accepted. We conducted a long-term experiment in which a communication robot interacted with people daily in an office. From the questionnaire answers for the experiment, we found that three fundamental issues, offering familiarity, reading the situation, and playing a social role, are required for a robot to be socially accepted. In this paper, we propose the hypothesis that these three fundamental issues must be fulfilled by a socially accepted robot. We then tested our hypothesis through a six-week experiment in an office. We present the details of the experimental results and discuss which of the issues is the most important for a socially accepted robot.
  • KAKIO Masayuki, MIYASHITA Takahiro, MITSUNAGA Noriaki, ISHIGURO Hiroshi, HAGITA Norihiro
    Journal of the Robotics Society of Japan, 26(6) 485-492, Aug 29, 2008  Peer-reviewed
    Humans always sway their bodies when they are standing. Since this swaying is natural for humans, they are not conscious of it. Today, however, almost all robots are designed to suppress swaying to ensure stability. If communication robots could control swaying appropriately, it might help humans anthropomorphize them. In this paper, we evaluate how the swaying of a humanoid robot affects human swaying and human impressions. We measured the neck movements of the humans and the robot and asked the subjects to answer a questionnaire after the experiments. We found that human swaying and impressions were affected by the robot's swaying. Human swaying is observed whenever communication takes place. To realize swaying on the humanoid robot we use only wheel control, so the robot can sway while performing other actions with its upper body. The results of this paper thus suggest how robots' swaying can be realized.
  • Noriaki Mitsunaga, Christian Smith, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita
    IEEE TRANSACTIONS ON ROBOTICS, 24(4) 911-916, Aug, 2008  Peer-reviewed, Lead author, Last author, Corresponding author
    Human beings subconsciously adapt their behaviors to a communication partner in order to make interactions run smoothly. In human-robot interactions, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human-robot interactions, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just like a human would. However, most previous research expected the human to consciously give feedback, which might interfere with the aim of interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interactions. The mechanism uses gazing at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. A pilot study with a humanoid robot that has ten interaction behaviors was conducted. The results from 12 subjects suggest that the proposed mechanism enables autonomous adaptation to individual preferences. Detailed discussion and conclusions are also presented.
  • Taichi Tajika, Tomoko Yonezawa, Noriaki Mitsunaga
    MM'08 - Proceedings of the 2008 ACM International Conference on Multimedia, with co-located Symposium and Workshops, 793-796, 2008  Peer-reviewed
    In this paper we propose an intuitive page-turning and browsing interface for e-books on a flexible e-paper, based on user studies. Our user studies showed various types of page-turning actions, such as flipping, grasping, and sliding, in different situations or by different users. We categorized these actions into three categories: turning, flipping through, and leafing through the page(s). Based on this categorized model, we developed a conceptual design and prototype of an interface for an e-book reader, which enables intuitive page-turning interactions using a simple architecture in both hardware and software design. The prototype has a flexible plastic sheet with bend sensors, which is attached to a small LCD monitor to physically unite the visual display with a tangible control interface based on the natural page-turning actions used in reading a real book. The prototype handles all three page-turning actions observed in the user studies by interpreting the bend degree of the sheet.
  • Noriaki Mitsunaga, Zenta Miyashita, Kazuhiko Shinozawa, Takahiro Miyashita, Hiroshi Ishiguro, Norihiro Hagita
    2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 22-26, 2008, Acropolis Convention Center, Nice, France, 3336-3343, 2008  Peer-reviewed, Lead author, Last author, Corresponding author
  • Shuichi Nishio, Norihiro Hagita, Takahiro Miyashita, Takayuki Kanda, Noriaki Mitsunaga, Masahiro Shiomi, Tatsuya Yamazaki
    2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 22-26, 2008, Acropolis Convention Center, Nice, France, 2637-2642, 2008  Peer-reviewed
  • Tomoko Yonezawa, Noriaki Mitsunaga, Taichi Tajika, Takahiro Miyashita, Shinji Abe
    SIGGRAPH2008, Poster, B134-2 pages, 2008  Peer-reviewed
  • MITSUNAGA Noriaki, MIYASHITA Zenta, MIYASHITA Takahiro, ISHIGURO Hiroshi, HAGITA Norihiro
    Journal of the Robotics Society of Japan, 25(6) 822-833, Sep 15, 2007  Peer-reviewed, Lead author, Last author, Corresponding author
    We believe that humanoids will take an active part in our daily lives in the near future as important media, since a human-size humanoid enables people to more easily accept a sense of humanlike expressions or emotions, wishes, and so forth in addition to capabilities of other media, such as the ability to collect and provide information from the Internet and via ubiquitous sensors connected through a network. Once this time arrives, humanoids will be expected to have the social skills necessary for interacting with people in addition to the ability to carry out their own tasks. We call an interaction that increases familiarity and makes communication smoother a “social interaction.” We have recently developed a human-size humanoid, called “Robovie-IV.” Robovie-IV features interaction abilities such as the ability to chat vocally, which are intended to be “social interactions.” Using Robovie-IV we have conducted an experiment in which it interacts with people in an everyday office environment. This paper discusses the design requirements of Robovie-IV and introduces an overview of its hardware and software architectures. Then, the experimental results, discussions, and conclusions are presented.
  • Noriaki Mitsunaga, Emma Svienstins, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita
    Mar, 2007  Peer-reviewed, Lead author, Last author, Corresponding author
  • Masayuki Kakio, Takahiro Miyashita, Noriaki Mitsunaga, Hiroshi Ishiguro, Norihiro Hagita
    Dec, 2006  Peer-reviewed
  • MITSUNAGA Noriaki, SMITH Christian, KANDA Takayuki, ISHIGURO Hiroshi, HAGITA Norihiro
    Journal of the Robotics Society of Japan, 24(7) 820-829, Oct 15, 2006  Peer-reviewed, Lead author, Last author, Corresponding author
    When humans interact in a social context, there are many factors apart from the actual communication that need to be considered. Previous studies in behavioral sciences have shown that there is a need for a certain amount of personal space and that different people tend to meet the gaze of others to different extents. For humans, this is mostly subconscious, but when two persons interact, there is an automatic adjustment of these factors to avoid discomfort. In this paper we propose an adaptation mechanism for robot behaviors to make human-robot interactions run more smoothly. We propose such a mechanism based on policy gradient reinforcement learning, that reads minute body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interaction. We show that this enables autonomous adaptation to individual preferences by the experiment with twelve subjects.
  • Noriaki Mitsunaga, Takahiro Miyashita, Hiroshi Ishiguro, Kiyoshi Kogure, Norihiro Hagita
    Oct, 2006  
  • Noriaki Mitsunaga, Minoru Asada
    AUTONOMOUS ROBOTS, 21(1) 3-14, Aug, 2006  Peer-reviewed
    Most current mobile robots are designed to determine their actions according to their positions. Before making a decision, they need to localize themselves. Thus, their observation strategies are mainly for self-localization. However, observation strategies should not only be for self-localization but also for decision making. We propose an observation strategy that enables a mobile robot to make a decision. It enables a robot equipped with a limited viewing angle camera to make decisions without self-localization. A robot can make a decision based on a decision tree and on prediction trees of observations constructed from its experiences. The trees are constructed based on an information criterion for the action decision, not for self-localization or state estimation. The experimental results with a four legged robot are shown and discussed.
  • Kazuhiko Shinozawa, Takahiro Miyashita, Noriaki Mitsunaga, Ren Ohmura, Norihiro Hagita
    Proc. of the 36th International Symposium on Robotics(ISR2005), Nov, 2005  Peer-reviewed
  • N Mitsunaga, C Smith, T Kanda, H Ishiguro, N Hagita
    2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 1594-1601, 2005  Peer-reviewed
    In this paper we propose an adaptation mechanism for robot behaviors to make robot-human interactions run more smoothly. We propose such a mechanism based on reinforcement learning, which reads minute body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interaction. We show that this enables autonomous adaptation to individual preferences by an experiment with twelve subjects.
  • N Mitsunaga, T Izumi, M Asada
    ADVANCED ROBOTICS, 19(2) 207-218, 2005  Peer-reviewed
    This paper proposes a subjective map representation that enables a robot in a multi-agent system to make decisions in a dynamic, hostile environment. A typical situation can be found in the Sony four-legged robot league of the RoboCup competition. The subjective map is a map of the environment that each agent maintains regardless of the objective consistency of the representation among the agents. Due to the map's subjectivity, it is not affected by incorrect information acquired by other agents. The method is compared with conventional methods with or without information sharing.
  • MITSUNAGA Noriaki, ASADA Minoru
    Journal of the Robotics Society of Japan, 21(7) 819-827, Oct 15, 2003  Peer-reviewed, Lead author, Last author, Corresponding author
    Visual attention is one of the most important issues for a vision-guided mobile robot. Methods have been proposed for visual attention control based on an information criterion [3][9]. However, the robot had to stop walking for observation and decision. This paper presents a method which enables more efficient and adaptive observation and decision while the robot is walking. The method uses the expected information gain from future observations for attention control and action decision. It also proposes an image compensation method to handle the image changes due to the robot's motion. Both are used to estimate observation probabilities from the observations made while walking, and action probabilities are then estimated from a decision tree based on the information criterion. The method is applied to a four-legged robot. Discussions of the visual attention control in the method and of future issues are given.
  • N Mitsunaga, T Izumi, M Asada
    IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 291-296, 2003  Peer-reviewed
    This paper proposes a subjective map representation that enables a multiagent system to make decisions in a dynamic, hostile environment. A typical situation can be found in the Sony four-legged robot league of the RoboCup competition [1]. The subjective map is a map of the environment that each agent maintains regardless of the objective consistency of the representation among the agents. Owing to the map's subjectivity, it is not affected by incorrect information belonging to other agents. For example, it is not affected by non-negligible errors caused by dynamic changes in the environment, such as falling down or being picked up and brought to other places by the referee. A potential field is defined on the subjective map in terms of subtasks, such as approaching and shooting the ball, and the field is dynamically updated so that the robot can decide what to do next. This method is compared with conventional methods that involve sharing or not sharing information.
  • MITSUNAGA Noriaki, ASADA Minoru
    Journal of the Robotics Society of Japan, 20(7) 751-758, Oct 15, 2002  Peer-reviewed, Lead author, Last author, Corresponding author
    Visual attention is one of the most important issues for a mobile robot accomplishing a given task in complicated environments, since vision sensors bring a huge amount of data. This paper proposes a method of sensor space segmentation for visual attention control that enables efficient observation, taking the time needed for observation into account. The efficiency is considered not from the viewpoint of geometrical reconstruction but from that of unique action selection based on an information criterion, regardless of localization uncertainty. The method is applied to a four-legged robot that tries to shoot a ball into the goal. To build a decision tree, a training set is given by the designer, and a kind of off-line learning is performed on the given data set. Discussion of the visual attention control in the method is given and future issues are shown.
  • N Mitsunaga, M Asada
    2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS, 244-249, 2002  Peer-reviewed
    Visual attention is one of the most important issues for a vision-guided mobile robot. Methods have been proposed for visual attention control based on an information criterion [3, 4]. However, the robot had to stop walking for observation and decision. This paper presents a method which enables more efficient and adaptive observation and decision while the robot is walking. The method uses the expected information gain from future observations for attention control and action decision. It also proposes an image compensation method to handle the image changes due to the robot's motion. Both are used to estimate observation probabilities from the observations made while walking, and action probabilities are then estimated from a decision tree based on the information criterion. The method is applied to a four-legged robot. Discussions of the visual attention control in the method and of future issues are given.
  • MITSUNAGA Noriaki, ASADA Minoru
    Journal of the Robotics Society of Japan, 19(6) 793-800, Sep 15, 2001  Peer-reviewed, Lead author, Last author, Corresponding author
    This paper proposes a method for constructing a decision tree and prediction trees of the landmarks that enable a robot with a limited visual angle to make decisions without self-localization in the environment. Since global positioning from the 3-D reconstruction of landmarks is generally time-consuming and prone to errors, the robot makes decisions based on the appearance of landmarks. By using the decision and prediction trees built on an information criterion, the robot can achieve the task efficiently.
  • N Mitsunaga, M Asada
    IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RJS INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 1714-1719, 2001  Peer-reviewed
    Visual attention is one of the most important issues for a vision-guided mobile robot, not simply because visual information brings a huge amount of data but also because the visual field is limited, so gaze control is necessary. This paper proposes a method of sensor space segmentation for visual attention control that enables mobile robots to realize efficient observation. The efficiency is considered not from the viewpoint of geometrical reconstruction but from that of unique action selection based on an information criterion, regardless of localization uncertainty. The method builds a decision tree based on the information criterion while taking the time needed for observation into account, and attention control is done by following the tree. The tree is rebuilt by introducing contextual information for more efficient attention control. The method is applied to a four-legged robot that tries to shoot a ball into the goal. Discussion of the visual attention control in the method is given and future issues are shown.
  • N Mitsunaga, M Asada
    2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, 1038-1043, 2000  Peer-reviewed
    Self-localization seems necessary for mobile robot navigation. Conventional methods, such as geometric reconstruction from landmark observations, are generally time-consuming and prone to errors. This paper proposes a method which constructs a decision tree and prediction trees of the landmark appearance that enable a mobile robot with a limited visual angle to observe efficiently and make decisions without global positioning in the environment. By constructing these trees based on an information criterion, the robot can accomplish the given task efficiently. The validity of the method is shown with a four-legged robot.
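Several of the entries above (the Autonomous Robots 2006, IROS 2000/2001, and JRSJ 2001-2003 papers) build decision and prediction trees over landmark appearances using an information criterion, so that the robot selects actions without self-localization. As a minimal sketch of the underlying information-gain calculation, here is a toy version with an entirely hypothetical training set (observation attributes and action labels are made up for illustration; the actual papers use richer appearance features and tree construction):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of action labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """Expected reduction in action-label entropy after observing `attr`.

    `examples` is a list of (observation dict, action label) pairs.
    """
    base = entropy([action for _, action in examples])
    n = len(examples)
    gain = base
    for value in {obs[attr] for obs, _ in examples}:
        subset = [action for obs, action in examples if obs[attr] == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Hypothetical appearance-based observations -> action, with no self-localization.
examples = [
    ({"ball": "left",  "goal": "far"},  "turn_left"),
    ({"ball": "left",  "goal": "near"}, "turn_left"),
    ({"ball": "right", "goal": "far"},  "turn_right"),
    ({"ball": "front", "goal": "near"}, "shoot"),
]

# The attribute with the highest gain is the most informative thing to attend to.
best = max(["ball", "goal"], key=lambda a: information_gain(examples, a))
print(best)  # → ball
```

Attending to the observation with the highest expected gain is what lets the robot commit to a unique action after as few observations as possible, which is the efficiency argument these abstracts make.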

Misc.

 14
  • 光永 法明
    大阪教育大学紀要. 第3部門, 自然科学・応用科学, 64(2) 41-54, Feb 29, 2016  
    Micro controllers are used in many industrial products, including familiar home appliances, and many of them connect easily to sensors and actuators, the interface between computers and the real world; their use is therefore expanding into education outside engineering, such as interaction design and junior high school technology classes. Meanwhile, small and inexpensive devices such as tablets and smartphones are spreading. If micro controllers could be programmed easily from such devices, the barrier to using them would be lowered. Such devices have limited computing resources, but if an interpreter is placed on the micro controller, the device side needs no compiler, only an editor and debugging support, so limited resources are not a problem. In this study, we developed iArduino, an interpreted language that runs on the micro controller and supports interactive programming, and programming environments that visualize program behavior and I/O: iArduinoTerminal for PC and iArduinoTerminal for Android for tablets. The language is intended for beginners learning programming and electronic circuits while making their own creations; its interpreter runs on Arduino's micro controller, its grammar resembles the Arduino language (C/C++), and it has a built-in debugging interface to visualize pin values and variables. In this paper, we report the implementation of the language and the development environments, which could help students in micro controller programming courses.
  • 井芹 威晴, 光永 法明
    研究報告コンピュータと教育(CE), 2015(8) 1-6, Feb 7, 2015  
    In recent years, tablet devices have spread widely, and the number of people who own only a tablet rather than a PC is increasing. A visual programming environment for tablets that can build micro controller programs would therefore make it easier for more people to start electronics projects with micro controllers, but no such environment could be found. Moreover, placing an interpreter on the micro controller makes it easier to realize a programming environment even on a resource-limited computer such as a tablet. We therefore developed aiBlocks, a visual programming environment for tablets that builds programs for micro controllers running the iArduino interpreter; we report it and introduce a line-tracing car as a worked example.
  • 光永 法明, 山形 慎平
    大阪教育大学紀要. 第5部門, 教科教育 = Memoirs of Osaka Kyoiku University, 62(2) 63-70, Feb, 2014  
    Programming experience is expected not only to help children understand and use computers but also to lead to other learning. RoboCupJunior is an educational activity for students up through age 19, in which children compete with robots they prepare and program themselves, rather than tele-operate. We developed a prototype programming text whose goal is to help readers prepare, on the programming side, for participating in the RoboCupJunior soccer challenge, and which is designed to raise their interest in the outside world through robot programming. We report the design policy and structure of the text and its evaluation by students of a robot school, who answered that the text "would be good to have" for their junior fellows.
  • 光永法明
    研究報告コンピュータと教育(CE), 2013(8) 1-4, Mar 8, 2013  
    Micro controllers are used in many industrial products, including familiar home appliances, and many of them connect easily to sensors and actuators, the interface between computers and the real world; their use is therefore expanding into education outside engineering, such as interaction design and junior high school technology classes. Meanwhile, small and inexpensive devices such as tablets and smartphones are spreading. If micro controllers could be programmed easily from such devices, the barrier to using them would be lowered. Although such devices have limited computing resources, placing an interpreter on the micro controller and leaving only an editor and debugging support on the device side avoids the problem. In this study, we developed and report a programming environment, running on a tablet, for micro controllers carrying an interpreted language.
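The iArduino and aiBlocks entries above argue that hosting the interpreter on the micro controller removes the need for a compiler on the tablet and lets the environment query live pin values for debugging. A minimal sketch of that division of labor, using a made-up two-command language (this is not the actual iArduino grammar; the class and command names are hypothetical):

```python
class MiniInterpreter:
    """Toy line-at-a-time interpreter standing in for firmware-side interpretation.

    The editor/debugger side only sends text lines and reads back values,
    so it needs no compiler and can stay small, as on a tablet.
    """

    def __init__(self):
        self.pins = {}  # hypothetical pin registers: pin number -> value

    def eval(self, line):
        parts = line.split()
        if parts[0] == "set":      # e.g. "set 13 1" -> drive pin 13 high
            self.pins[int(parts[1])] = int(parts[2])
            return None
        if parts[0] == "get":      # e.g. "get 13" -> report pin value to the debugger
            return self.pins.get(int(parts[1]), 0)
        raise ValueError("unknown command: " + parts[0])

interp = MiniInterpreter()
interp.eval("set 13 1")
print(interp.eval("get 13"))  # → 1
```

Because evaluation happens where the pin state lives, the "get" path doubles as the debugging interface that visualizes pin values, which is the design point these abstracts emphasize.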

Books and Other Publications

 22

Presentations

 40

Teaching Experience

 13

Research Projects

 2

Industrial Property Rights

 23