beestanbul RoboCup 3D Simulation League Team Description Paper 2012


Baris Demirdelen, Berkay Toku, Onuralp Ulusoy, Tuna Sonmez, Kubra Ayvaz, Elif Senyurek, and Sanem Sariel-Talay

Artificial Intelligence and Robotics Laboratory, Computer Engineering Department, Istanbul Technical University, Istanbul, TURKEY
http://air.cs.itu.edu.tr/beestanbul
sariel@itu.edu.tr

Abstract. The main objective of the beestanbul project is to develop an efficient software system that correctly models the behaviors of simulated Nao robots in a competitive environment. Several AI algorithms are being used and developed to contribute to robotics research while also advancing the quality of competition in the 3D Simulation League. This team description paper presents the important aspects of the overall system design and outlines the methods used in the different modules.

1 Introduction

The beestanbul project from the Artificial Intelligence and Robotics Laboratory (AIR Lab) at Istanbul Technical University (ITU) is the first initiative from ITU to participate in the RoboCup competitions. Earlier projects in the AIR Lab were mainly on cooperative multirobot systems. This challenging project was initiated in 2009 to apply the experience gained from earlier research on multirobot systems [1] to competitive environments as well. The beestanbul team consists of undergraduate and graduate students from the Computer Engineering Department of ITU. The main goal of the team is to contribute to the main objective of the RoboCup project through an efficient design of a software system. The designed software system serves as a basis for applying several high-level intelligence, reasoning and learning methods. The beestanbul team has participated in the RoboCup competitions since 2010 and has won several games during these competitions. The software architecture of the system has been revised, and several promising motion types have been developed since 2010. With the completion of the high-level modules, the team is expected to achieve the project's main goal.

The rest of this team description paper is organized as follows. Section 2 presents the software system architecture for simulated Nao robots in the SimSpark simulation environment. The motions and behaviors available to an agent are presented in Sections 3 and 4, respectively. The developed team strategy and planning methods are discussed in Section 5, followed by the conclusion in Section 6.

2 System Architecture

The overall software system consists of several modules that interact with each other (Fig. 1). The Server Layer performs two-way communication with the SimSpark server; it decodes incoming messages and encodes outgoing messages. These operations are carried out using the SimSpark utilities and the rcssnet library provided by SimSpark.

Fig. 1. Overall Software Architecture (SimSpark server; Server Layer: Parser, Server Communication; Agent Layer: World Model, Agent Model, Walking Engine, Agent Communication, Behaviour, Planner, Vision, Team Strategy, Localization).

The Agent Layer is responsible for the main functionalities of an agent. Each agent maintains its own world model of the environment and an agent model of its own state. The Localization module determines the correct pose of the agent using information gained from the observed flags. Localization is performed with a triangulation method and a Kalman filter using the distances and angles to the flags. In situations where triangulation is not possible, odometry information is used.
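As a rough illustration of the triangulation step described above, the minimal sketch below estimates the agent's 2D position from the measured distances to two seen flags with known field coordinates. The flag names and coordinates, the consistency check, and the use of the previous estimate to pick between the two circle-intersection candidates are illustrative assumptions; the actual Localization module additionally uses the measured angles and fuses the estimate with a Kalman filter and odometry, which are omitted here.

import math

# Known global positions of two landmark flags (illustrative placeholder
# coordinates, not the official SimSpark field values).
FLAGS = {"F1L": (-9.0, 6.0), "F2L": (-9.0, -6.0)}

def triangulate(obs, prev_estimate):
    """Estimate the agent's (x, y) from distances to two seen flags.

    obs: dict flag_name -> measured distance (metres)
    prev_estimate: last pose estimate, used to pick between the two
                   circle-intersection candidates (an odometry prediction
                   would be used here in a full implementation).
    """
    (n1, r1), (n2, r2) = list(obs.items())[:2]
    (x1, y1), (x2, y2) = FLAGS[n1], FLAGS[n2]

    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2:          # degenerate or inconsistent measurement
        return prev_estimate

    # Standard two-circle intersection (trilateration).
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = x1 + a * dx / d, y1 + a * dy / d
    candidates = [(mx + h * dy / d, my - h * dx / d),
                  (mx - h * dy / d, my + h * dx / d)]

    # Keep the candidate consistent with the previous pose estimate.
    return min(candidates, key=lambda p: math.dist(p, prev_estimate))

# Example: noisy distances to two flags, previous estimate near midfield.
print(triangulate({"F1L": 9.2, "F2L": 10.1}, prev_estimate=(0.0, 0.5)))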

The Vision module determines the positions of the observed objects in the environment. Based on the observation history, a semantic analysis of the objects (including the opponents) is carried out and predictions are made. The agent decides on a behavior based on its agent and world models, the selected team strategy and its assigned role. The Walking Engine module determines how the agent should walk given a target position and an orientation. The Behaviour module is responsible for executing commands such as move-to-target or dribble-to-target. The commands for the corresponding behavior are sent to the Server Layer to generate the desired effect. Simultaneously, either informative or query messages may be sent to teammates based on the selected team role.

3 Motions

In our earlier implementation, we used the Partial Fourier Series (PFS) model [2] for motions in the coronal plane [3]. This implementation includes seven main body motion types and special transition functions for smoothly switching between two arbitrary motion types. Although the robots are able to walk fast enough with this model, the transitions cause delays in reactive behaviors. Therefore, we have shifted our focus to implementing omnidirectional motions using the motion model given in [4]. The new implementation will include static and dynamic motions. Static motions, such as standing up or searching for the ball with the head, are predefined. Dynamic motion patterns, on the other hand, are generated on the fly depending on a target; these correspond to different types of walks, turns and alignments. We plan to optimize the motion parameters with an optimization algorithm such as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [5], which is almost parameter-free.
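To make the PFS formulation concrete, the sketch below evaluates a single joint-angle trajectory as a truncated (partial) Fourier series in the spirit of the model in [2]. The joint, the number of harmonics and the parameter values are illustrative assumptions rather than the team's tuned gait; coefficients like these are exactly the kind of parameters an optimizer such as CMA-ES would be expected to tune.

import math

def pfs_angle(t, offset, amplitudes, phases, period):
    """Joint angle (degrees) at time t from a partial Fourier series:
    theta(t) = offset + sum_k A_k * sin(2*pi*k*t/period + phi_k)."""
    return offset + sum(
        a * math.sin(2.0 * math.pi * (k + 1) * t / period + p)
        for k, (a, p) in enumerate(zip(amplitudes, phases))
    )

# Illustrative parameters for a single hip-pitch joint (not tuned values).
params = dict(offset=10.0, amplitudes=[25.0, 5.0], phases=[0.0, 1.2], period=0.6)

# Sample one gait cycle at the 20 ms simulation step.
trajectory = [pfs_angle(i * 0.02, **params) for i in range(30)]
print([round(a, 1) for a in trajectory[:5]])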

4 Behaviors

A layered selection architecture organizes the motions, behaviors, plans and the cases used by the team strategy. Such a hierarchy provides modularity and ease of maintenance. Motions operate at the lowest level, and behaviors are constructed as sequences of low-level motions. Example behaviors include move-to-target, turn-to-target and dribble-to-target. On top of these behaviors, static plans are constructed in the form of behavior sequences, and the team strategy component uses these plans in a modular way. A sample instantiation of a plan is given in Figure 2.

Fig. 2. A sample instantiation of the dribble-to-goal plan (an attacker plan composed of the move-to-target, dribble-to-target and kick-to-goal behaviors, which in turn use motions such as turn in, turn out, diagonal walk, side walk, forward walk, backward walk, dribble and stop).
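As an illustration of this layering, the sketch below encodes the dribble-to-goal plan of Figure 2 as a sequence of behaviors, each of which expands into low-level motions. The class layout, the expand method and the particular motion decomposition of each behavior are illustrative assumptions; only the plan, behavior and motion names are taken from the figure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Behavior:
    """A behavior is a named sequence of low-level motions."""
    name: str
    motions: List[str] = field(default_factory=list)

@dataclass
class Plan:
    """A static plan is a named sequence of behaviors used by the team strategy."""
    name: str
    behaviors: List[Behavior] = field(default_factory=list)

    def expand(self) -> List[str]:
        """Flatten the plan into the motion sequence handed to the Behaviour module."""
        return [m for b in self.behaviors for m in b.motions]

# Names follow Fig. 2; the motion decomposition of each behavior is illustrative.
dribble_to_goal = Plan("dribble-to-goal", [
    Behavior("move-to-target",    ["turn-out", "forward-walk"]),
    Behavior("dribble-to-target", ["diagonal-walk", "dribble"]),
    Behavior("kick-to-goal",      ["turn-in", "stop"]),
])

print(dribble_to_goal.expand())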

5 Team Strategy and Planning

The team strategy involves four sequential processes to determine the target of an agent [6]. Figure 3 presents the main modules of the team strategy. Initially, two groups, namely attackers and defenders, are formed by using a group formation strategy, and the role of each agent is determined based on these groups. The attackers group involves the forward and the midfielder agents, while the defenders group involves only the defender agents. There are three different planners designed for the four different roles. The forward planner uses a finite state machine (FSM) that moves the agent to the ball regardless of the teammates and chooses an appropriate action from the kick and dribble behaviors. The goalkeeper planner always keeps the goalkeeper in the team's penalty area, where it tries to form an obstacle against the opponents to prevent them from scoring. The final planner is an FSM used by both defenders and midfielders; the only difference between the two roles is the target calculation. Based on the assigned role of an agent, the corresponding planner is activated.

Fig. 3. General structure of the team strategy.

A case-based group formation method is used to determine the numbers of defender and midfielder agents dynamically. Since two agents are assigned to the goalkeeper and forward roles, the remaining seven agents are to be distributed between the defender and midfielder roles. Instead of using predetermined numbers for these roles, a case-based method is applied to determine the best separation. Equation 1 shows the information that is considered in selecting a case.

Case = (BallPos., Score, AgentPos., NumberOfMidf., NumberOfDef.)   (1)

The midfielder and defender agents need to position themselves so as to maintain close proximity to the forward agent and to defend the goal, respectively. This is accomplished by a distributed Voronoi cell construction approach in which each agent calculates its own cell independently of the others. Therefore, every agent has a differently shaped cell, and these cells can overlap. The time complexity of the method is O(n^2), where n is the number of agents in the team. Figure 4 illustrates the iterations for calculating the final cell, and the corresponding target as the center of this cell, for agent #2 (a_2), which is a midfielder and therefore draws its initial cell according to the ball position. As mentioned before, only the teammates in the field of view of the agent are considered; the area that is out of a_2's field of view is shown as the shaded area. Figure 4(a) shows the initial cell construction based on the ball position (P_B). In Figures 4(b), (c), and (d), the cell is modified according to the locations of a_5, a_6, a_8 and a_9, respectively. The line for a_9 does not have any intersection points with the current cell, so it does not change the cell. The final Voronoi cell of a_2 is shown with the red frame, and the center of that cell is marked with a red point in Figure 4(d).

Fig. 4. Step-by-step calculation of the Voronoi cell for an agent (a_2).

The method is used for both midfielders and defenders. Defenders create their cells with the same algorithm, but their initial cell is calculated according to the midpoint of the line connecting the ball position and the center of the team's goal, while midfielders use the ball location. After constructing its cell, each agent takes the center of the cell as its new target. With this strategy the agents stay closer to each other, which is more beneficial for attacking in soccer.
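The following minimal sketch illustrates the distributed cell construction under stated assumptions: the initial cell is modeled as an axis-aligned square around the seed point (the ball position for a midfielder), the cell is then clipped by the perpendicular bisector between the agent and each visible teammate, and the vertex average of the remaining polygon is taken as the new target. The square size, the vertex-average centroid and the helper names are illustrative; only the bisector-clipping idea and the choice of seed point follow the description above.

def clip_halfplane(poly, agent, mate):
    """Keep the part of convex polygon `poly` (list of (x, y) vertices) that is
    closer to `agent` than to `mate`, i.e. clip by their perpendicular bisector."""
    ax, ay = agent
    mx, my = mate

    # f(p) <= 0  <=>  p is closer to the agent; f is affine in p, so edge
    # crossings can be found by linear interpolation.
    def f(p):
        return (p[0] - ax) ** 2 + (p[1] - ay) ** 2 - (p[0] - mx) ** 2 - (p[1] - my) ** 2

    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]
        fp, fc = f(prev), f(cur)
        if fp <= 0:
            out.append(prev)
        if (fp <= 0) != (fc <= 0):                      # edge crosses the bisector
            t = fp / (fp - fc)
            out.append((prev[0] + t * (cur[0] - prev[0]),
                        prev[1] + t * (cur[1] - prev[1])))
    return out

def centroid(poly):
    """Vertex average; a rough stand-in for the true polygon centroid."""
    return (sum(x for x, _ in poly) / len(poly), sum(y for _, y in poly) / len(poly))

def positioning_target(agent, seed, visible_mates, half=3.0):
    """Initial cell: an axis-aligned square around the seed point (the ball for a
    midfielder); clip against every visible teammate and return the cell center."""
    sx, sy = seed
    cell = [(sx - half, sy - half), (sx + half, sy - half),
            (sx + half, sy + half), (sx - half, sy + half)]
    for mate in visible_mates:
        cell = clip_halfplane(cell, agent, mate)
        if not cell:                                    # fully clipped away
            return seed
    return centroid(cell)

# a_2 is a midfielder: its seed is the ball position; three teammates are visible.
print(positioning_target(agent=(1.0, 0.0), seed=(2.0, 1.0),
                         visible_mates=[(4.0, 1.0), (2.0, -3.0), (0.0, 4.0)]))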

6 Conclusion

This team description paper outlines the different parts of the software system developed for the beestanbul robots. beestanbul is an ongoing project, and the low-level modules of the architecture have been designed and implemented. Current work includes designing the omnidirectional motions. The future focus will be on the use of efficient team coordination strategies together with the designed motions, and on the integration of learning capabilities for the robots.

References

1. Sariel-Talay, S., Balch, T.R., Erdogan, N.: A generic framework for distributed multirobot cooperation. Journal of Intelligent and Robotic Systems 63(2) (2011) 323-358
2. Asta, S., Sariel-Talay, S.: Nature-inspired optimization for biped robot locomotion and gait planning. In: Proceedings of the 6th European Event on Nature-inspired Techniques in Scheduling, Planning and Timetabling (EvoSTIM) (2011) 434-443
3. Picado, H., Gestal, M., Lau, N., Reis, L.P., Tomé, A.M.: Automatic generation of biped walk behavior using genetic algorithms. In: Proceedings of the 10th International Work-Conference on Artificial Neural Networks, Part I: Bio-Inspired Systems: Computational and Ambient Intelligence (IWANN 09). Springer-Verlag, Berlin, Heidelberg (2009) 805-812
4. Graf, C., Röfer, T.: A closed-loop 3D-LIPM gait for the RoboCup Standard Platform League humanoid. In Zhou, C., Pagello, E., Behnke, S., Menegatti, E., Röfer, T., Stone, P., eds.: Proceedings of the Fourth Workshop on Humanoid Soccer Robots, IEEE-RAS International Conference on Humanoid Robots (Humanoids-10) (2010)
5. Igel, C., Hansen, N., Roth, S.: Covariance matrix adaptation for multi-objective optimization. Evolutionary Computation 15(1) (2007) 1-28
6. Ulusoy, O., Sariel-Talay, S.: Distributed team formation for humanoid robot soccer. In: Proceedings of the 4th International Conference on Agents and Artificial Intelligence, Vilamoura, Algarve, Portugal (2012)