Invited speakers
Professor Jacky Baltes Biography
University of Manitoba
Canada
URL: http://www.cs.umanitoba.ca/~jacky

“Recent Advances in Humanoid Robotics”


The talk describes recent advances in humanoid robotics research in the areas of active balancing and push recovery. The first part of the talk will describe Jennifer, the first ice-hockey-playing humanoid robot. As most beginning skaters realize very quickly, ice skating is more difficult than walking. The talk will analyze ice skating from the perspective of humanoid robotics control and will highlight the differences between skating and walking. This leads to a discussion of why typical approaches to humanoid robotics control, such as zero moment point (ZMP) control, do not work well on skating robots.
In the second part, a highly dynamic challenge task - balancing on a bongo board (also called a balance board) - is introduced. A bongo board is a flat board placed on top of a small barrel. This task is often performed by acrobats in the circus, since it is difficult and people need to practice for a long time before they can balance on the board. Not surprisingly, this task is also very difficult for robots. The bongo board, like the related rod-and-cart system, has often been analysed as an inverted pendulum, a classical problem in control. Many control systems have been proposed, including traditional control and more modern approaches such as neural networks, fuzzy logic, and reinforcement learning. However, implementing these approaches in practice is hard because of limits in the accuracy and update speed of sensors and actuators. We describe a predictive control approach that allows us to overcome these limitations and is able to balance the robot in a dynamically stable configuration.
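The inverted-pendulum view of the bongo board can be made concrete with a toy simulation. The sketch below is an illustration only, not the speaker's predictive controller: it stabilizes a linearized inverted pendulum with a hypothetical PD feedback law, and the gains and physical parameters are assumed values chosen for the example.

```python
# Minimal sketch (illustrative assumptions, not the talk's controller):
# a linearized inverted pendulum, theta'' = (g/l)*theta + u, stabilized
# by a PD feedback torque u (per unit inertia). Gains kp, kd and the
# parameters g, l are hypothetical.

def simulate(theta0=0.2, dt=0.001, steps=5000, g=9.81, l=1.0, kp=30.0, kd=10.0):
    """Euler-integrate the closed loop; return the final tilt angle (rad)."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        u = -kp * theta - kd * omega      # PD feedback
        alpha = (g / l) * theta + u       # linearized pendulum dynamics
        omega += alpha * dt
        theta += omega * dt
    return theta
```

With these gains the closed loop is well damped, so an initial tilt of 0.2 rad decays essentially to zero within the simulated five seconds; real robots face the sensor accuracy and update-rate limits the abstract mentions, which this idealized loop ignores.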

Professor Hyun Myung Biography
KAIST
Korea
URL: http://urobot.kaist.ac.kr

“Robot Navigation and Monitoring Technologies for U-City”


In this talk, the core technologies that provide various services in the ubiquitous city (U-City) will be introduced, focusing mainly on localization, autonomous navigation, and monitoring technologies.
  • Robot navigation
    - Wireless and Vision-based SLAM (Simultaneous Localization And Mapping)
    - Swarm robot formation control
    - JEROS (Jellyfish removal robot)
  • Structural Health Monitoring (SHM)
    - Vision-based SHM
    - Flying and wall-climbing robot for SHM

Professor Victor RASKIN Biography
Purdue University
USA
URL: http://web.ics.purdue.edu/~vraskin/Raskin.html

“Robotic Intelligence—a Specific Form of Artificial Intelligence”


Much success has been achieved in robotic hardware: today’s robots can perform physical activities that were not achievable only a few years ago; they are more stable, faster, carry more sensors, and are more versatile. At the same time, progress, perhaps not so dramatic but steady, has been achieved in artificial intelligence (AI) and its applications. This paper focuses on bringing these two types of parallel achievements together, thus rendering robots more autonomous, smarter, and usable in more applications, as well as rendering AI theory testable and verifiable in practical applications. One reasonably mature approach to such a synthesis is the ontologization of the robotic world and the alignment of this ontology with a general, property-rich, multi-domain ontology, such as the one developed in Ontological Semantic Technology (OST) (cf. Raskin 2012, the Best Paper Award presentation at RITA 2012). This approach underlies the HARMS hybrid natural-language-based (NL-based) system of communication within a team of humans, robots, agents, etc. (see Matson et al. 2011, an ICARA presentation).

The specificity of AI in robotic intelligence is that the world of each individual robot is limited to its capabilities and functionalities, even though it is not as “primitive” as one might think (see Raskin 2012, again). As a result, any general command, whether in NL, such as English or Korean, or in any other ontologized form, that orders the robot to go somewhere and do something must be transposed into the very specific modes of activity of each robot. The paper will show that, while this may seem an endless nuisance, it is not a unique situation but rather a domain shift that is characteristic of purely human use of NL and has been proven to be computationally feasible in OST.

A special role in the development of the theoretical base and in the computational implementation of this ontological process of specification/robotization is played by the crucial notion of scripts as goal-oriented, strongly standardized sequences of actions. Thus, the typical ‘go and do something’ command for a human will typically involve getting dressed, collecting all the necessary items, getting into the car, driving to a familiar place and, say, depositing a check in an ATM. For the robot, a command to determine and report the outside temperature will mean activating and reading the temperature sensor and then determining the best way to communicate the reading to the ordering human and/or another robot. If this operation requires physical contact or proximity, determining its own and the other party’s locations and figuring out the shortest and safest route, with obstacle avoidance, become part of the script. The detection and determination of scripts is essential for any information processing system but, for robots, the problem seems both simpler, because of the limited world of the robot, and more complex, because every seemingly trivial process must be explicated without any possibility of glossing over it.
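The temperature-report example above can be sketched as a script in code. The following is a hypothetical illustration of a script as a standardized action sequence; the step names and the function are inventions for this sketch, not part of OST or HARMS.

```python
# Hypothetical sketch: a robot script as a goal-oriented, strongly
# standardized sequence of primitive actions. All step names below are
# illustrative assumptions, not an actual OST/HARMS vocabulary.

def report_temperature_script(sensor_reading, needs_proximity=False):
    """Expand the 'report outside temperature' command into a script."""
    steps = ["activate_temperature_sensor", "read_temperature"]
    if needs_proximity:
        # Physical contact/proximity expands the script with navigation:
        # locating both parties and planning a safe, obstacle-avoiding route.
        steps += ["locate_self", "locate_addressee",
                  "plan_shortest_safe_route", "navigate_avoiding_obstacles"]
    steps.append("communicate_reading")
    return steps, f"outside temperature: {sensor_reading} C"
```

The point of the sketch is the expansion step: the same high-level command yields a longer script when the robot’s situation (here, a need for proximity) demands it, with nothing glossed over.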

The last point of the paper is that extending robotic intelligence will also extend robotic applications as well as their marketability. A fire-fighting robot, for instance, that is equipped with a significant number of chips may take over its own navigation, thus dispensing with human tele-guidance; carry a large number of sensors; analyze the image on its camera on its own, thus differentiating between a sack of sand and an unconscious human; communicate with other fire-fighting and different-specialization robots, such as those in charge of medical evacuation from an environment that is harmful to a human; and perform other well-defined human tasks in a HARMS-type team environment. Notably, many such capabilities will extend beyond fire-fighting, thus making the robots usable and marketable for a wider spectrum of applications.

The author would like to acknowledge his indebtedness to Profs. Julia M. Taylor and Eric T. Matson of Purdue’s M2M Lab, as well as to their and his own graduate students/research assistants, current and former, for their valuable contributions to his conceptualization of the ideas and methods of this research. He is fully responsible, however, for the statements in this paper, without implying that his colleagues fully share all of them.

Professor Henrik Hautop Lund Biography
Technical University of Denmark
Denmark
URL: www.playware.dk

“Playware from Distributed Robotics”


Playware is defined as intelligent hardware and software that creates play and playful experiences for users of all ages. With recent technology developments, we have conceptualized the approach of modular playware in the form of building blocks that exploit distributed robotics, modern artificial intelligence, and embodied artificial life to create playware which acts as a play force, inspiring and motivating users to enter into a play dynamics. Building blocks should allow easy and fast expert-driven or user-driven development of playware applications for a given application field, and they find their inspiration in distributed robotics.
Distributed robotics takes many forms, for instance multi-robots, modular robots, and self-reconfigurable robots. The understanding and development of such advanced robotic systems demand extensive knowledge in engineering and computer science. We introduce a method that allows a change of representation of the problems related to multi-robot control and human-robot interaction control, from a virtual to a physical representation, focusing on interactive parallel and distributed processing programming as the foundation for distributed robotics and human-robot interaction development. Further, the proposed method provides a system for bringing into education a vast number of issues (such as parallel programming, distribution, communication protocols, master dependency, connectivity, topology, island modeling, software behavioral models, adaptive interactivity, feedback, user interaction, etc.). We illustrate how the proposed system can be considered a tool for easy, fast, and flexible hands-on exploration of these distributed robotics issues, and show how to implement interactive parallel and distributed processing in robotics with different software models such as open-loop, randomness-based, rule-based, user-interaction-based, AI- and ALife-based, and morphology-based models.
Further, the talk will present the design principles for creating such modular playware technology, with a focus on the embodied AI principles that form the foundation of the design principles of modular playware technology. I will exemplify the design principles with practical applications from the fields of play, sports, music, performance art, and health.
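Three of the software models named above can be contrasted in a few lines of code. This is a hypothetical sketch, not the Playware API: each “block” chooses its light color per time step under an open-loop, a randomness-based, or a rule-based behavior; all names are illustrative assumptions.

```python
import random

# Illustrative sketch of three behavior models for a single playware block.
# A behavior maps (time step, pressed?) -> light color.

def open_loop(step, _pressed):
    # Open loop: a fixed sequence that ignores user interaction entirely.
    return ["red", "green", "blue"][step % 3]

def randomness_based(_step, _pressed):
    # Randomness based: color chosen independently of step and input.
    return random.choice(["red", "green", "blue"])

def rule_based(_step, pressed):
    # Rule based: a simple stimulus-response rule on the press sensor.
    return "green" if pressed else "off"

def run(behavior, presses, steps=6):
    """Run a behavior for `steps` ticks; `presses` is the set of ticks pressed."""
    return [behavior(t, t in presses) for t in range(steps)]
```

The same `run` loop drives all three behaviors, which mirrors the change-of-representation idea: swapping the behavior function, not the hardware loop, changes what the user experiences.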