DAInamite Team Description for RoboCup 2013

Axel Heßler, Yuan Xu, Erdene-Ochir Tuguldur and Martin Berger
{axel.hessler, yuan.xu, tuguldur.erdene-ochir, martin.berger}@dai-labor.de
http://www.dainamite.de
DAI-Labor, Technische Universität Berlin, Germany

1 Introduction

The team DAInamite from the DAI-Labor of the Technical University Berlin wants to take part in the Standard Platform League (SPL) at RoboCup 2013 in Eindhoven, Netherlands, with its five NAO V4 H21 robots (RoboCup edition). We are new to this league, but not new to the RoboCup competitions, as we started in the 2D soccer simulation league in 2006 (see e.g. [1-3]). We gathered our first SPL experience at the RoboCup German Open 2012, where we implemented a simple agent with a sense-think-act execution cycle in the Java programming language and used as many of Aldebaran's modules as possible. We collected additional practical experience during a 14-day event called Ideenpark (http://www.ideenpark.de/ideenpark/funbox/roboterfussball) in August 2012, where we played demo games together with members of the NAO Devils SPL team from TU Dortmund and learned a lot from them about humanoid robot soccer.

After these two events we started from scratch, following a simple methodology: we use the modules from Aldebaran that are already available and working, so that we can focus on problems that are more beneficial or interesting to us. We rely on Aldebaran's NAOqi architecture as the communication infrastructure between modules. We prototype our ideas and concepts in the Python programming language; only time-critical parts such as motion and vision are implemented in C/C++.
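For illustration, the following minimal Python sketch shows how such a prototype module can be attached to NAOqi; the module name, its method and the robot address are examples only and do not correspond to our actual modules.

```python
# Minimal sketch of attaching a prototype Python module to NAOqi.
# The broker connects to the robot's main NAOqi instance; the module name
# "DAIExample", its method and the robot address are purely illustrative.
from naoqi import ALBroker, ALModule, ALProxy

class DAIExample(ALModule):
    """Toy module whose methods become callable by other NAOqi modules."""

    def __init__(self, name):
        ALModule.__init__(self, name)
        self.memory = ALProxy("ALMemory")  # uses the broker created below

    def ping(self):
        # Other modules (Python or C++) could call this via ALProxy("DAIExample")
        return "pong"

if __name__ == "__main__":
    # Connect to the NAOqi instance on the robot (address and port are examples).
    broker = ALBroker("daiBroker", "0.0.0.0", 0, "nao.local", 9559)
    example = DAIExample("DAIExample")  # the global name registers the module
    # ... keep the process alive while other modules use the proxy ...
```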

2 Team Members

The team consists of Bachelor and Master students from the Technical University Berlin (TU-Berlin). They are backed by student assistants and researchers working at the Distributed Artificial Intelligence (DAI) Laboratory, which is directed by Prof. Dr. Sahin Albayrak. The current team members are:

Researchers working at the DAI-Lab: Axel Heßler (team leader), Erdene-Ochir Tuguldur, Martin Berger, Yuan Xu and Felix Bonowski.
Student assistants working at the DAI-Lab: Lars Borchert, Stephen Prochnow, Philipp Brock and Leily Behnam Sani.
Undergraduate students at TU-Berlin: Ruth Motza, Nadine Rohde, Thomas Schlegel, Arne Salzwedel and Lukas Weise.

Prof. Dr. Dr. h.c. Sahin Albayrak is the head of the chair Agent Technologies in Business Applications and Telecommunication (AOT) at the Technical University Berlin. He is also the founder and head of the DAI-Lab. The DAI-Lab performs applied research and development of new systems and services and applies and tests its solutions in real environments to make them tangible for users. Current application fields are, for example, electromobility, smart grid, home, health, ambient assisted living, security, and service robotics. Concrete research fields of our team members are agent-oriented software engineering, agent testbeds, cooperation and coordination, machine learning, planning and scheduling, computer vision, and human-robot interaction. The main application field of our NAOs is teaching agent-oriented and robotic principles.

3 Architecture

We use Aldebaran's NAOqi architecture as the basis. As mentioned earlier, our main policy is to use what is already available and working. Missing functionality is implemented by adding our own NAOqi modules, written in either Python or C++ depending on the requirements. These modules are described in the following.

3.1 Motion

We have developed our own motion module in C++ for better performance in soccer games. In particular, we implemented a fast (20 cm/s) omni-directional walk based on the Linear Inverted Pendulum model [4].

Fig. 1. Architecture of the motion module, see text for details.
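The walk builds on the cart-table / Linear Inverted Pendulum model used by the ZMP preview controller of [4]. The following sketch shows only this underlying state-space model (one axis, constant CoM height), not the full preview controller; the sampling time and numeric values are illustrative, not the values used in our module.

```python
# Sketch of the cart-table / Linear Inverted Pendulum model behind ZMP
# preview control [4]. Constant CoM height z_c is assumed; the sampling
# time and numbers are illustrative examples.
import numpy as np

T = 0.02      # control cycle [s] (example value)
z_c = 0.26    # CoM height [m] (example value for a NAO-sized robot)
g = 9.81      # gravity [m/s^2]

# State per axis: [CoM position, velocity, acceleration]; input: CoM jerk.
A = np.array([[1.0, T, T**2 / 2.0],
              [0.0, 1.0, T],
              [0.0, 0.0, 1.0]])
B = np.array([T**3 / 6.0, T**2 / 2.0, T])
C = np.array([1.0, 0.0, -z_c / g])   # maps the CoM state to the ZMP position

def step(x, u):
    """Propagate the CoM state one cycle and return (next state, current ZMP)."""
    x_next = A.dot(x) + B * u
    zmp = C.dot(x)
    return x_next, zmp

# Example: starting at rest, a constant jerk moves the CoM while the ZMP lags.
x = np.zeros(3)
for _ in range(5):
    x, p = step(x, u=1.0)
print("CoM after 5 cycles:", x[0], "last ZMP:", p)
```

The preview controller then chooses the jerk input so that the ZMP output follows a reference trajectory derived from the planned footsteps.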

We also keep our motion module compatible with other modules from Aldebaran. The APIs of ALMotion are implemented, and functionality such as self-collision avoidance, a simple Fall Manager, and Smart Stiffness was developed as well. Furthermore, the module provides the camera's homogeneous transformation matrix, derived from the robot's configuration, as well as odometry data for other modules. In order to test and debug the motion module easily, we divided it into several sub-modules, see Figure 1. The DAIMotion module can run as a local or a remote module; when running remotely, it accesses the DCM through shared memory. The module can also run with recorded log files and in the SimSpark simulator.

3.2 Vision

Our team has reimplemented the calibration-free vision algorithm proposed in [5]. Like our motion module, the vision module is implemented in C/C++ and replaces the Aldebaran video device module. To accelerate image capturing, we use Video4Linux directly to capture images from both cameras in parallel. By default, the vision module processes only images from the bottom camera; if requested, the images from the top camera are processed in parallel to locate the goal posts faster. However, the vision cannot guarantee to process both frames in time when two cameras are used.

Fig. 2. Original image (left) and a visualization of the processing results (right); the field and detected entities are highlighted. The goal posts are marked in yellow, the field border in magenta, the lines in white and the ball in red.

Currently, the vision module is able to detect the goal posts, the field border, the lines and the ball. The results are written into the agent's central memory managed by Aldebaran's ALMemory module. Other modules, such as localization, can read the results from this memory. An exemplary result of the vision processing is depicted in Figure 2, where detected objects are highlighted in different colors. Furthermore, the vision module provides methods for debugging and configuration, such as setting camera parameters and enabling/disabling cameras.
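A Python prototype can access these results through a standard ALMemory proxy, as in the following sketch; the ALMemory key names and the robot address shown here are illustrative examples, not the actual keys written by our vision module.

```python
# Sketch of how a Python prototype (e.g. the localization) can read vision
# results from ALMemory. Key names and the robot address are examples only.
from naoqi import ALProxy

memory = ALProxy("ALMemory", "nao.local", 9559)

ball = memory.getData("DAIVision/Ball")            # e.g. [x, y, radius] in pixels
goal_posts = memory.getData("DAIVision/GoalPosts")
field_border = memory.getData("DAIVision/FieldBorder")

if ball is not None:
    print("ball seen at pixel", ball[0], ball[1])
```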

3.3 Localization

Localization is currently prototyped in Python. Similar to other teams [6, 7], we use a Kalman filter for this purpose. The measurement of the robot's pose is computed from recognized goal posts, and the prediction of the robot's pose is provided by the odometry of the motion module. The robot's pose has to be tracked continuously in order to distinguish the opponent's goal from our own.

3.4 Ball tracking

Like our other high-level components, the ball tracking is implemented in Python. Our vision module reliably detects the ball's position if a ball is present in the image and is not heavily occluded; if no ball is present, the vision occasionally returns false positives. To cope with this situation, we have implemented a simple multiple model Kalman filter as in [6]. The detected ball is transformed from pixel coordinates to a global position on the soccer field using the camera matrix. The ball tracking uses this global ball position as measurement and returns a list of ball hypotheses, each containing the predicted ball position, velocity and the Kalman filter error.
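The following sketch illustrates the idea of such a multiple-model filter in the spirit of [6]: several constant-velocity hypotheses are kept in parallel so that occasional false positives spawn separate, low-ranked hypotheses instead of corrupting the best track. All parameter values and the gating heuristic are illustrative, not our actual implementation.

```python
# Sketch of a multiple-model Kalman filter for ball tracking in the spirit
# of [6]. Constant-velocity model per hypothesis; all parameters are examples.
import numpy as np

DT = 0.033                          # frame period [s] (example)
F = np.array([[1, 0, DT, 0],        # constant-velocity motion model
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],         # only the ball position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                # process noise (example)
R = np.eye(2) * 1e-2                # measurement noise (example)

class Hypothesis(object):
    def __init__(self, pos):
        self.x = np.array([pos[0], pos[1], 0.0, 0.0])   # [x, y, vx, vy]
        self.P = np.eye(4)
        self.error = 1.0            # pessimistic prior error for fresh hypotheses

    def predict(self):
        self.x = F.dot(self.x)
        self.P = F.dot(self.P).dot(F.T) + Q

    def update(self, z):
        y = z - H.dot(self.x)                        # innovation
        S = H.dot(self.P).dot(H.T) + R
        K = self.P.dot(H.T).dot(np.linalg.inv(S))
        self.x = self.x + K.dot(y)
        self.P = (np.eye(4) - K.dot(H)).dot(self.P)
        self.error = 0.9 * self.error + 0.1 * float(y.dot(y))

def track(hypotheses, measurement, gate=0.5, max_models=4):
    """Advance all hypotheses and integrate one (possibly false) measurement."""
    for h in hypotheses:
        h.predict()
    if measurement is not None:
        z = np.asarray(measurement, dtype=float)
        dists = [np.linalg.norm(z - H.dot(h.x)) for h in hypotheses]
        if dists and min(dists) < gate:
            hypotheses[int(np.argmin(dists))].update(z)
        else:                                        # unexplained ball: new model
            hypotheses.append(Hypothesis(z))
    hypotheses.sort(key=lambda h: h.error)           # best-ranked hypothesis first
    return hypotheses[:max_models]
```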

3.5 Behavior

Fig. 3. Snapshot of the goalie performing a save while staying upright (left) and by falling to the side (right).

A prototyped soccer behavior is implemented in Python. At the beginning of the game all players assume they are on their own half of the field in order to localize themselves, and they are able to go to the kick-off positions autonomously. Currently, three roles are defined in our NAO team. The robots communicate with each other and share their perceptions: own position, goal positions and ball position. We plan to use the ball perception of the goalie to break the symmetry of the two yellow goals.

Striker: The field player who is closest to the ball becomes the striker. It approaches the ball and positions itself behind it, facing what it considers to be the opponent's goal. The robot then executes a kick motion and repeats.

Supporter: Field players who are not currently assigned the striker role become supporters. Supporters do not actively pursue the ball. In the future they should position themselves strategically.

Goalie: The goalie tries to stay roughly in the middle between its own two goal posts. During the game, the goalie computes the direction of the ball movement using the information provided by the ball tracking module. The robot tries to parry if the direction of the ball points towards its own goal and the ball's velocity exceeds a threshold. It then tries to block the ball either by staying upright and lowering one arm to the ground or by executing a save falling to the side, as depicted in Figure 3.

4 Conclusion

We described our approach to soccer-playing NAOs and presented our first solutions to major problems related to humanoid robot soccer. We expect large improvements as we address these problems in more depth. One of the next steps will be to give the supporter role more meaningful behavior.

Acknowledgments

Large influences came from four German teams: Nao Team Humboldt, Nao Devils TU Dortmund, B-Human from the University of Bremen, and Nao-Team HTWK Leipzig. We mainly learned about concepts and algorithms from them and want to thank them at this point, but we have not used any of their code so far.

References

1. Endert, H., Wetzker, R., Karbe, T., Heßler, A., Brossmann, F.: The DAInamite agent framework. Technical report, DAI-Labor, TU Berlin (2006)
2. Endert, H., Karbe, T., Krahmann, J., Trollmann, F., Kuhnen, N.: The DAInamite 2008 team description. RoboCup 2008 (2008)
3. Hessler, A., Berger, M., Endert, H.: DAInamite 2011 team description paper. RoboCup 2011 (2011)
4. Kajita, S., Kanehiro, F., Kaneko, K., Fujiwara, K., Harada, K., Yokoi, K., Hirukawa, H.: Biped walking pattern generation by using preview control of zero-moment point. In: ICRA (2003) 1620-1626
5. Reinhardt, T.: Kalibrierungsfreie Bildverarbeitungsalgorithmen zur echtzeitfähigen Objekterkennung im Roboterfußball. Master's thesis, Hochschule für Technik, Wirtschaft und Kultur Leipzig (2011)
6. Quinlan, M.J., Middleton, R.H.: Multiple model Kalman filters: a localization technique for RoboCup soccer. In: Baltes, J., Lagoudakis, M.G., Naruse, T., Ghidary, S.S. (eds.): RoboCup 2009. Springer-Verlag, Berlin, Heidelberg (2010) 276-287
7. Jochmann, G., Kerner, S., Tasse, S., Urbann, O.: Efficient multi-hypotheses unscented Kalman filtering for robust localization. In: RoboCup 2011: Robot Soccer World Cup XV, Springer (2012)