DAInamite. Team Description 2014


DAInamite Team Description 2014

Axel Heßler, Yuan Xu, Erdene-Ochir Tuguldur and Martin Berger
{axel.hessler, yuan.xu, tuguldur.erdene-ochir, martin.berger}@dai-labor.de
http://www.dainamite.de
DAI-Labor, Technische Universität Berlin, Germany

Abstract. This document describes the progress of the team DAInamite from the DAI-Labor, TU Berlin (Germany) in the SPL. We give a brief overview of the team's history and constitution, our general architecture and its relevant components (motion, vision, and behavior), as well as the employed tools, and point out recent work.

1 Introduction

Team DAInamite originated in the 2D soccer simulation league, in which it participated several times since 2006 [1-4]. DAInamite's first appearance in the SPL was at the German Open 2012 in Magdeburg, and its first participation in the SPL world championship was at RoboCup 2013 in Eindhoven [5], where it reached the quarter finals.

The team's code is mainly written in C++ and Python. The time-critical components for motion and vision are implemented in C++; the remaining modules, such as localization, behavior, and ball tracking, are implemented in Python.

2 Team Constitution

The team consists of undergraduates, graduate students, and postdocs of TU Berlin, mainly from Faculty IV (CS&EE) 1. The team is hosted at the chair Agent Technologies in Business Applications and Telecommunication (AOT) and the DAI-Labor (DAI-Lab) at TU Berlin. Prof. Sahin Albayrak is the head of the AOT chair and founder and head of the DAI-Labor. Concrete research interests of the team members are agent-oriented software engineering, autonomous systems, cooperation & coordination, and human-robot interaction.

1 http://www.tu-berlin.de, http://www.dai-labor.de, http://www.aot.tu-berlin.de/, http://www.dainamite.de

3 Architecture

We use Aldebaran's NAOqi architecture as the basis, to benefit from the following features: functions can be called from both C++ and Python, and they can also be executed remotely over the network. To achieve higher computational performance during soccer games, when the system runs as a whole, we embed the Python code (using Boost.Python) as a local module into NAOqi's process to remove communication overhead. Our main policy was to first add missing functionality as necessary and then replace modules to improve performance. Depending on the performance requirements, these modules are written in either Python or C++.

3.1 Motion

We developed our own motion module in C++ for better performance in soccer games. In particular, we use a fast (20 cm/s) omni-directional walk based on the Linear Inverted Pendulum Model [6].

Fig. 1. Architecture of the motion module.

The motion module is compatible with the API of the original motion module shipped by Aldebaran. The basic API and functionality (such as self-collision avoidance, Smart Stiffness, etc.) of ALMotion are implemented, so our motion module can serve as an enhanced replacement of ALMotion. Additionally, our motion module provides odometry data to other modules, as well as homogeneous transformation matrices for both cameras, derived from the robot's configuration.

In order to test and debug our motion module easily, we divided it into several sub-modules, see Figure 1. As the core part, DAIMotion can run as a local or a remote module. It accesses the DCM through shared memory when run locally on the robot; data is communicated via NAOqi when it is run with recorded logfiles or the SimSpark simulator.
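For illustration, the closed-form center-of-mass trajectory of the linear inverted pendulum that underlies such a walk can be written down in a few lines. This is a generic textbook sketch (constant CoM height z_c, support point at the origin), not our motion module's code; the function name and parameters are illustrative.

```python
import math

def lipm_com(x0, v0, z_c, t, g=9.81):
    """Analytic CoM state of the linear inverted pendulum.

    x0, v0: initial CoM position/velocity relative to the support point,
    z_c:    constant CoM height, t: time elapsed since support change.
    Returns the CoM position and velocity at time t.
    """
    tc = math.sqrt(z_c / g)  # pendulum time constant
    x = x0 * math.cosh(t / tc) + tc * v0 * math.sinh(t / tc)
    v = (x0 / tc) * math.sinh(t / tc) + v0 * math.cosh(t / tc)
    return x, v
```

A walk generator evaluates such trajectories per support phase and chooses foot placements so that the CoM state at the end of one step matches the desired initial state of the next.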

3.2 Vision

Our team uses a vision algorithm strongly inspired by [7]. Like our motion module, the vision module is implemented in C/C++ and replaces the Aldebaran video device module. To accelerate image acquisition, we use Video4Linux to capture images from both cameras in parallel.

Fig. 2. Original image (left) and a visualization of the processing results (right) for a sample image. Detected elements are highlighted: field border (magenta) and field pixels, goal posts (yellow), line segments (white), and ball (red).

Currently, the vision module is able to detect goal posts, the ball, field lines, the field border, and obstacles. Visual obstacle detection is done using blob detection on a downsampled image, treating regions whose color differs from the detected field color as obstacles and ignoring areas of previously detected objects (i.e. goal posts and lines). An exemplary result of the vision processing is pictured in Figure 2, where the detected entities are highlighted in different colors. These results are then written into the agent's blackboard memory managed by Aldebaran's module (ALMemory), from which other modules, such as localization, can retrieve them. Furthermore, the vision module provides methods for debugging and configuration, such as setting camera parameters and enabling or disabling processing of either camera.

3.3 Ball tracking

Like the other high-level components, the ball tracking is implemented in Python. Our vision module reliably detects a ball if it is present in the image, no further away than six meters, and not heavily occluded. Because of the magenta jerseys introduced at RoboCup 2013 [8], we adjusted the ball detection to cope with multiple red peaks on the field: within the area of each detected obstacle, the ball detection is run additionally, and these hypotheses are checked and filtered afterwards. If no ball is present in the current image, the vision may occasionally return false positives. To cope with this situation, we use a multi-model Kalman filter (i.a. as described in [9]). The detected ball positions are tracked in 2D, relative to the robot. These measurements are then integrated into the best-fitting hypothesis, or spawn new hypotheses if they are deemed outliers for all existing hypotheses.

3.4 Localization

For localization, a particle filter is implemented in Python using NumPy and SciPy. Perceived goal posts, field lines, and the field border are used as features for evaluating the hypotheses of the robot's position. Odometry for the predict phase is provided by the motion module. At the beginning of each kick-off, and for example when re-entering the game after having been penalized, players assume they are within their own half of the field, in accordance with the rules. Symmetry is not actively broken, so the robots continuously track their position to distinguish the opponent's goal from their own. To reduce the risk of scoring an own goal in case a field player becomes delocalized, the goalie informs all players when the ball is approaching its goal, so players can correct their orientation if necessary. This only works under the assumption that the goalie is localized and in the correct position.

3.5 Behavior

Fig. 3. Generated visualization showing a part of the active striker's behavior during game state Playing. The striker's states include SearchForBall, LookAroundForBall, TurnForBall, GoToBall, ApproachBall, AvoidLeft, AvoidRight, Dribble, TurnAroundBallToOpponentGoal, and Shoot, connected by transitions such as "is ball found", "is close to ball", "is facing opponent goal", and "is collision detected left/right".
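The multi-hypothesis ball tracking described in Section 3.3, which integrates each measurement into the best-fitting hypothesis or spawns a new one for outliers, can be sketched as follows. This is a minimal illustration with scalar (isotropic) covariances and a simple distance gate; all class names, gate sizes, and noise values are illustrative assumptions, not our actual implementation.

```python
class BallHypothesis:
    """One 2D ball-position hypothesis with an isotropic covariance (scalar)."""
    def __init__(self, x, y, var=0.25):
        self.x, self.y, self.var = x, y, var

    def predict(self, process_noise=0.02):
        self.var += process_noise  # uncertainty grows between frames

    def update(self, zx, zy, meas_noise=0.04):
        k = self.var / (self.var + meas_noise)  # Kalman gain
        self.x += k * (zx - self.x)
        self.y += k * (zy - self.y)
        self.var *= (1.0 - k)

    def distance2(self, zx, zy):
        return (zx - self.x) ** 2 + (zy - self.y) ** 2


class MultiModelTracker:
    """Assign each measurement to the closest hypothesis within a gate,
    otherwise spawn a new hypothesis (outlier handling)."""
    def __init__(self, gate=0.5):
        self.gate2 = gate * gate
        self.hypotheses = []

    def step(self, measurements):
        for h in self.hypotheses:
            h.predict()
        for zx, zy in measurements:
            best = min(self.hypotheses,
                       key=lambda h: h.distance2(zx, zy), default=None)
            if best is not None and best.distance2(zx, zy) < self.gate2:
                best.update(zx, zy)  # integrate into best-fitting hypothesis
            else:
                self.hypotheses.append(BallHypothesis(zx, zy))  # outlier

    def best(self):
        """Most confident hypothesis, i.e. the one with the lowest variance."""
        return min(self.hypotheses, key=lambda h: h.var, default=None)
```

A real tracker would additionally track ball velocity, prune stale hypotheses, and weight hypotheses by how often they are confirmed.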

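The predict and resample steps of a NumPy-based particle filter, as used for the localization in Section 3.4, can be sketched as follows. This is a minimal sketch assuming a 2D pose representation (x, y, theta) and illustrative noise values; the function names and parameters are not our actual code.

```python
import numpy as np

def predict(particles, d_forward, d_left, d_turn, noise=(0.02, 0.02, 0.05)):
    """Apply robot-frame odometry to particles (N x 3 array: x, y, theta),
    then add Gaussian noise to model odometry uncertainty."""
    theta = particles[:, 2]
    particles[:, 0] += d_forward * np.cos(theta) - d_left * np.sin(theta)
    particles[:, 1] += d_forward * np.sin(theta) + d_left * np.cos(theta)
    particles[:, 2] += d_turn
    particles += np.random.randn(len(particles), 3) * noise
    return particles

def resample(particles, weights):
    """Low-variance (systematic) resampling: draw particles in proportion
    to their observation weights, keeping the particle count constant."""
    n = len(particles)
    positions = (np.arange(n) + np.random.rand()) / n
    cumulative = np.cumsum(weights / weights.sum())
    idx = np.searchsorted(cumulative, positions)
    return particles[idx].copy()
```

The weights would come from comparing perceived goal posts, lines, and the field border against their expected appearance for each particle's pose.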
The behavior is implemented in Python using hierarchical state machines. A visual representation of the agents' behavior can be generated from these state machines using DOT [10]. Figure 3 shows an exemplary image of a part of our striker's behavior. The robots communicate with each other and share their own position, the ball's position, and their behavior's currently active state.

4 Tools

We mostly use different small tools for particular purposes. For quick experiments with the NAO, we created an extension to IPython, termed inao. It provides an enhanced interactive Python shell to interact with the NAO, giving fast access to all the modules' proxies (motion, memory, vision). Please watch the video 2 to get an impression of its use.

To help assess the effects of code changes (primarily to the behavior) on the team's in-game performance, we use a modified version of the SimSpark 3D soccer simulator from Nao-Team Humboldt 3 and continuous integration to let recent versions play against their predecessors.

Fig. 4. Team Viewer showing a replay of a situation in a game.

During games, the Team Viewer (Figure 4) helps in evaluating the overall state of the robots as a team. It can display and log all the data communicated by the robots and the Game Controller, which helps to analyze game situations post mortem. It also records the team communication sent by the robots and the packets sent by the game controller. To a limited extent, this

2 http://youtu.be/3e69bmgih7i
3 https://github.com/berlinunited/simspark-spl
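A hierarchical state machine of this kind can be sketched in a few lines of Python. The following fragment is a simplified illustration of three of the striker's states from Figure 3; the class design, function names, and thresholds are made up for this sketch and are not our actual behavior code.

```python
class State:
    """A state that may contain sub-states. Each tick returns the name of
    the next state on its own level (its own name means "stay")."""
    def __init__(self, name, tick=None, initial=None, children=()):
        self.name = name
        self._tick = tick or (lambda ctx: name)
        self.children = {c.name: c for c in children}
        self.active = initial  # name of the currently active sub-state

    def tick(self, ctx):
        if self.children:  # composite state: delegate to the active child
            self.active = self.children[self.active].tick(ctx)
            return self.name
        return self._tick(ctx)  # leaf state: run its own transition logic

# A tiny striker fragment: search for the ball, walk to it, shoot when close.
def search(ctx):
    return "GoToBall" if ctx["ball_seen"] else "SearchForBall"

def go(ctx):
    return "Shoot" if ctx["dist"] < 0.2 else "GoToBall"

def shoot(ctx):
    return "SearchForBall"  # after the kick, look for the ball again

striker = State("Striker", initial="SearchForBall", children=(
    State("SearchForBall", search),
    State("GoToBall", go),
    State("Shoot", shoot),
))
```

Because every state knows only the names of its successors, the whole hierarchy can be walked mechanically, which is what makes the DOT export of the behavior graph straightforward.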

enables us to replay certain situations for the robots' behavior and to evaluate their responses.

5 Conclusion and Future Work

We gave an overview of our architecture and our approaches to the major algorithmic challenges that have to be faced in humanoid robot soccer. Recent work has started on robot detection and on automating calibration (of joints, the camera matrix, and colors) to reduce setup efforts. We are also currently working on improving our localization, since it is still fragile in the face of visual occlusions.

References

1. Endert, H., Wetzker, R., Karbe, T., Heßler, A., Brossmann, F.: The DAInamite agent framework. Technical report, DAI-Labor, TU Berlin (2006)
2. Endert, H., Karbe, T., Krahmann, J., Trollmann, F., Kuhnen, N.: The DAInamite 2008 team description. RoboCup 2008 (2008)
3. Endert, H., Karbe, T., Krahmann, J., Trollmann, F.: The DAInamite 2009 team description. RoboCup 2009 (2009)
4. Hessler, A., Berger, M., Endert, H.: DAInamite 2011 team description paper. RoboCup 2011 (2011)
5. Hessler, A., Xu, Y., Tuguldur, E.O., Berger, M.: DAInamite team description for RoboCup 2013. RoboCup 2013: Robot Soccer World Cup XVII (2013)
6. Kajita, S., Kanehiro, F., Kaneko, K., Fujiwara, K., Harada, K., Yokoi, K., Hirukawa, H.: Biped walking pattern generation by using preview control of zero-moment point. In: ICRA (2003) 1620-1626
7. Reinhardt, T.: Kalibrierungsfreie Bildverarbeitungsalgorithmen zur echtzeitfähigen Objekterkennung im Roboterfußball. Master's thesis, Hochschule für Technik, Wirtschaft und Kultur Leipzig (2011)
8. RoboCup Technical Committee: RoboCup Standard Platform League (NAO) rule book (2013)
9. Quinlan, M.J., Middleton, R.H.: Multiple model Kalman filters: a localization technique for RoboCup soccer. In Baltes, J., Lagoudakis, M.G., Naruse, T., Ghidary, S.S., eds.: RoboCup 2009. Springer-Verlag, Berlin, Heidelberg (2010) 276-287
10. Gansner, E.R., North, S.C.: An open graph visualization system and its applications to software engineering. Software - Practice and Experience 30(11) (2000) 1203-1233